Publications


  • International study shows that while British cancer survival has improved over the past 20 years, the UK’s cancer survival rates lag behind the European average in 9 out of 10 cancers
  • 10,000 cancer deaths could be prevented each year if the UK hit the European average
  • Analysis shows that some British cancer survival rates trail those of developing nations such as Jordan, Puerto Rico, Algeria and Ecuador
  • Since the inception of the NHS in 1948, policy makers and clinicians have viewed the problem as the NHS being understaffed and underfunded
  • But the answer to the cancer care challenge in the UK is not straightforward
  • The global healthcare ecosystem has changed and is continuing to change faster than national policy responses
  • The UK’s cancer care challenges require more innovation, not just more reports, more money and more staff
  
UK cancer care lags that of other European nations: reasons and solutions
Part 2

Part 1 of this Commentary described the CONCORD-3 study reported in the January 2018 edition of The Lancet, which suggested that although 5-year cancer survival rates (the internationally accepted indicator of cancer care) have improved in Britain over the past 2 decades, the UK lags behind most large European countries in cancer care.
 
This is part 2 of the Commentary, which begins by describing some of the UK’s initiatives over the past 20 years to improve cancer mortality rates, speed up diagnoses and enhance the quality of cancer care for people living with the disease. All arrive at similar conclusions: UK cancer care strategies have reduced cancer mortality rates over time, but there is still more that can be done. They do not compare Britain’s cancer mortality rates with those of other European nations. Notwithstanding, there appears to be some consensus among leading clinicians and policy makers that the UK’s failure to close the cancer care gap with other European nations is because NHS England is underfunded and understaffed. While this explanation might provide part of the answer, it does not tell the whole story. The answer might have less to do with extra funds and extra staff, and more to do with the fact that the global healthcare ecosystem has changed faster than the thinking of UK policy makers and faster than structural changes to NHS England. To the extent that this is the case, improving cancer care in Britain may require not more money and more staff, but more innovation and more focus on actual patients’ needs rather than on what policy makers can provide politically.
 
National cancer initiatives: resolving patients’ needs or perpetuating the status quo?
 
Over the past 20 years the UK government has commissioned a number of strategies, taskforces and reports, all aimed at improving cancer diagnosis, treatment and management, enhancing the quality of life of people living with the disease, and reducing premature deaths. In 2000, the NHS launched the NHS Cancer Plan, which was, “committed to addressing health inequalities through setting new national and local targets for the reduction of smoking rates, the setting of new targets for the reduction of waiting times, the establishment of national standards for cancer services, and investment in specialist palliative care, the expansion and development of the cancer workforce, cancer facilities, and cancer research.” This was followed in 2007 by the Cancer Reform Strategy, which was designed to build, “on the progress made since the publication of the NHS Cancer Plan in 2000, and sets a clear direction for cancer services for the next five years. It shows how by 2012 our cancer services can and should become among the best in the world.”

 
Independent cancer taskforce
 
In January 2015, an Independent Cancer Taskforce was launched by NHS England, “to develop a five-year action plan for cancer services that will improve survival rates and save thousands of lives.” The NHS established the taskforce on behalf of the Care Quality Commission, Health Education England, Monitor, Public Health England and the Trust Development Authority. The taskforce was chaired by Harpal Kumar, then CEO of Cancer Research UK, and comprised representatives from a cross section of the cancer and healthcare communities.

In July 2015, the Independent Cancer Taskforce published a report entitled: Achieving world-class cancer outcomes: a strategy for England 2015-2020. The report identified key elements of a world-class cancer care system and suggested that this is what British cancer patients should expect and what NHS England should aim to provide by 2020. The strategy included, “effective prevention (so that people do not get cancer at all if possible); prompt and accurate diagnosis; informed choice and convenient care; access to the best effective treatments with minimal side effects; always knowing what is going on and why; holistic support; and the best possible quality of life, including at the end of life.” According to the report such a strategy would achieve world-class cancer outcomes and save 30,000 lives a year by 2020.

 
2nd National Cancer Strategy

Two months before the publication of the Taskforce’s report, in May 2015, the UK government launched a National Cancer Strategy. This was its second 5-year program to implement a world-class cancer strategy designed to increase the prevention of cancer, speed up its diagnosis, and improve the experience of people with the condition. It suggested that rapid progress had been made in a number of key and high-impact areas, and stated that, “if someone is diagnosed with cancer, they should be able to live for as long and as well as is possible, regardless of their background or where they live. They should be diagnosed early, so that the most effective treatments are available to them, and they should get the highest quality care and support from the moment cancer is suspected.”

Report of the National Cancer Transformation Board
 
In December 2016, a National Cancer Transformation Board, led by Cally Palmer, the Cancer Director for England, published a number of specific steps to improve cancer care, and reported that over the previous decade 5-year cancer survival rates in the UK had improved across all main cancers, and that at the end of 2016 cancer survival rates in Britain were at a record high, with 7,000 more people surviving cancer compared to 2013.
 
 
Interim report of the 2nd National Cancer Strategy

In October 2017, NHS England published an interim report of its 2015 National Cancer Strategy, which suggested that, “Survival rates for cancer have never been higher, and overall patients report a very good experience of care. However, we know there is more we can do to ensure patients are diagnosed early and quickly and that early diagnosis has a major impact on survival. We also know that patients continue to experience variation in their access to care, and this needs to be addressed. Early diagnosis, fast diagnosis and equity of access to treatment and care are central to the ‘National Cancer Programme’ and the transformation of services we want to achieve by 2020-21.” According to an NHS spokesperson, “Figures show that cancer survival is now at an all-time high in England, as a result of better access to screening, funding for effective new treatments and diagnostics and continued action to reduce smoking.”
 
Why cancer mortality rates in Britain lag other European countries
 
“If you look at similar European countries the proportion of GDP (Gross Domestic Product) the UK has spent on health in the last 10 to 15 years is low and has increased less than the others,” says Michael Coleman, Professor of Epidemiology and Vital Statistics at the London School of Hygiene & Tropical Medicine and co-author of the CONCORD-3 study described in part 1 of this Commentary. UK healthcare spending fell from 8.8% of GDP in 2009, when it averaged 10.1% in leading European countries, to 7.3% in 2014-15. “This difference between the likes of Germany and France is likely to explain some of what we are seeing,” says Coleman, and he also suggests that, “The number of medical specialists who deal with these diseases [cancer] tends to be low compared to other similar countries.” Let us examine the relative European healthcare spends and levels of staffing in NHS England.
 
Comparative GDP spends on healthcare

The OECD’s November 2016 Health at a Glance report suggests that in 2013 (the latest year for which data had been published) the UK spent 8.5% of its GDP on public and private healthcare. And a 2016 report from the King’s Fund, a charity, suggests that projected spending on NHS England as a proportion of the UK’s GDP in 2020-21 is 6.6%, just 0.3 percentage points above its level in 2000.
 
Challenges comparing healthcare spends

Notwithstanding, linking cancer mortality rates to the proportion of GDP nations spend on healthcare is not straightforward. This is partly because (i) different nations have different sources of healthcare funding, and (ii) a person’s purchasing power differs between countries. Fluctuations in relative national economic growth make such comparisons over time and between nations challenging. According to The Health Foundation, a higher percentage of UK healthcare spending is publicly funded compared to other European countries. For example, “In 2012, publicly funded spending accounted for 84.0% of UK healthcare spending. This is the third highest level in the EU-15 (average: 76.5%). In 2012, UK public spending on healthcare was slightly higher than the EU-15 average of 7.6% of GDP”. Between 2008 and 2012 the average annual change in healthcare spending per person was lower for the UK than for most EU-15 countries, which was largely the result of Greece, Ireland and Portugal making significant cuts to their healthcare spending. The rising prevalence of cancer and other chronic long-term diseases is a significant driver of increased healthcare costs. According to OECD data, UK spending on chronic long-term conditions is similar to the European average. However, the UK spends less than other European countries on pharmaceuticals and out-of-pocket payments. Further, on average, UK patients spend less time in hospital and generally use fewer resources (measured in terms of staff and beds).
 
A 2017 paper published by the Nuffield Trust suggests that, when different sources of healthcare funding and purchasing power parity are taken into consideration, the UK’s healthcare spend might actually be keeping pace with that of other European nations.
 
NHS “dangerously” understaffed

Let us now consider staffing. In 2017, The Royal College of Emergency Medicine reported that primary and emergency care doctors, who are crucial for the early diagnosis of cancer, were experiencing significant recruitment and retention challenges. According to 2018 figures, NHS England has nearly 100,000 jobs unfilled, including 35,000 nursing posts and 10,000 doctor vacancies. The total vacancies represent 1 in 12 of all NHS posts, which is enough to staff about 10 large hospitals. Further, the high number of unfilled NHS posts coincides with 0.25m more people visiting A&E in the first quarter of 2018 than in the equivalent period in 2016. According to Saffron Cordery, the director of policy and strategy for NHS Providers, “These figures show how the NHS has been pushed to the limit. Despite working at full stretch with around 100,000 vacancies and a real risk of staff burnout, and despite treating 6% more emergency patients, year on year in December (2017), trusts cannot close the gap between what they are being asked to deliver and the funding available”. A February 2018 finance report suggests that NHS England is heading for a £931m deficit in 2018 and is “dangerously” understaffed. This year-on-year deficit has since been revised to a projected £1.3bn shortfall, which is 88% worse than planned.
 
Reasons for shortages of health professionals

The NHS staffing challenges are aggravated by the fact that the number of British trainee primary care doctors is dwindling, newly qualified doctors are moving abroad, and experienced doctors are retiring early. Over the lifetime of NHS England, the UK has trained significantly fewer healthcare professionals than it needed, and the supply of qualified young British people has consistently outstripped the number of places in medical schools and nurse training. According to data from the General Medical Council (GMC), between 2008 and 2014 an average of 2,852 certificates were issued annually to enable British doctors to work abroad. A 2015 British Medical Association (BMA) poll of 15,560 primary care doctors found that 34% of respondents plan to retire early because of high stress levels, increasing workloads, and too little time with patients. Further, it is estimated that 10% of doctors and 7% of nurses employed by NHS England are nationals of other European countries. The uncertainties of Brexit (a term for the potential departure of the UK from the EU) add to the NHS’s challenges in recruiting and retaining healthcare professionals. According to a 2017 Health Foundation report, in 2016 more than 2,700 nurses left the NHS, an increase of 68% since 2014.
 
UK policy approach to healthcare shortages has not changed

Notwithstanding, NHS staff shortages are not new. In the 1960s, NHS hospitals in Britain introduced mass recruitment from Commonwealth countries, and this has influenced staffing policies ever since. Being able to recruit doctors and nurses from foreign countries provided NHS England with an “easy” solution to staff shortages. However, over the past 2 decades the global healthcare ecosystem has changed significantly, while UK healthcare staffing policies have not kept pace with the changes. Today, there is a substantial global gap between the supply of and demand for healthcare professionals. Countries such as India, which traditionally could be relied upon to provide healthcare professionals for NHS England, have changed, and the pool of potential Indian recruits has shrunk. Over the past 2 decades, the Indian economy has improved and the nation has developed a number of world-class hospital groups such as Apollo, Fortis and Narayana Health, which offer internationally competitive terms and conditions to Indian doctors and nurses. Increasingly, Indian hospitals retain more of the nation’s healthcare professionals, and indeed attract doctors working in the UK and the US to return. Further, NHS England has tended to be staffed on the basis of what successive governments can afford rather than what NHS patients actually need.
 
Challenges of planning healthcare needs

Although there is a significant shortage of healthcare professionals in NHS England, it is not altogether clear that (i) significantly increasing the number of NHS health professionals in the short to medium term will be possible, and (ii) simply increasing staff numbers will improve cancer care. Over the past 2 decades, as technologies and demographics have changed, so the demands on cancer professionals have changed. It is not necessarily the case that the NHS has the right mix of staff with the right mix of skills to deal effectively with changing conditions. Changing traditional roles, rather than simply boosting numbers, might contribute more to reducing cancer mortality rates and improving the quality of cancer care. Further, given the aforementioned challenges, it seems reasonable to suggest that a greater proportion of the UK’s annual healthcare spend might be more effective were it directed at cancer prevention rather than at “diagnosis and treatment”.
 
Preventing cancer
 
A substantial proportion of cancers can be prevented, including cancers caused by tobacco use, heavy consumption of alcohol, and obesity. According to the World Cancer Research Fund, about 20% of all cancers diagnosed in the developed world are caused by a combination of excess body weight, physical inactivity, excess alcohol consumption, poor nutrition, and tobacco use, and thus could be prevented. Certain cancers caused by infectious agents such as the human papilloma virus (HPV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV) can be prevented by behavioural changes, vaccination or treatment of the infection. Further, many of the 5m skin cancer cases diagnosed worldwide each year (16,000 in the UK) could be prevented by protecting skin from excessive sun exposure and not using indoor tanning machines.
 
Cancer screening
 
Screening is known to reduce mortality from cancers of the breast, colon, rectum, cervix, and lung. Screening can help prevent colorectal and cervical cancers by allowing for the detection and removal of pre-cancerous lesions. Screening also provides an opportunity for detecting some cancers early, when treatment is less expensive and more likely to be successful. Early diagnosis is an important factor in improving cancer outcomes. Currently, the UK offers 3 national screening programs, for bowel, breast and cervical cancer. Notwithstanding, recent reports suggest that these programs are not being fully utilised. For example, in 2017 the percentage of women taking up invitations for breast cancer screening was at its lowest level in a decade, dropping to 71%. Over 1.2m women in the UK (25% of the eligible population) did not take up their invitation for cervical screening. Further, a heightened awareness of changes in certain parts of the body, such as the breast, skin, eyes and genitalia, may also result in the early detection of cancer.
 
Reconciling bureaucracy with innovation
 
We have described how UK cancer strategies are determined from the top. Cancer care professionals conform to internationally accepted standard processes, which facilitate and reinforce control. ‘Control’ and ‘conformism’ are in the DNA of cancer healthcare professionals and provide the cultural norms of NHS cancer care programs. NHS managers ensure conformance to clinical procedures, medications, targets, budgets, and quality care standards. This describes a classic “bureaucracy”, the technology of control and conformism, and the 70-year-old command-and-control structure of NHS England. While control, alignment, discipline and accountability are very important to cancer care programs, innovation is equally important. If NHS England’s cancer mortality rates are to be comparable with those of other European healthcare systems, we will have to find a way to retain the benefits of bureaucracy (precision, consistency, and predictability) while making the architecture and culture of our cancer care programs more innovative and more compatible with the demands of rapidly evolving 21st-century science and technology.
 
Takeaways

Cancer is a vexed and profoundly challenging disorder. As soon as you read about a breakthrough, there is news that the cancer has outwitted the scientists, hence the name “the emperor of all maladies”. Cancer care in the UK has improved, but the majority of British cancer patients would still fare significantly better in other European countries. When reflecting on the myriad cancer strategies, reports, and taskforces of the past 2 decades, you cannot help but think that NHS England suffers from an element of bureaucratic inertia: the tendency of the NHS to perpetuate its established procedures and modus operandi, even if they do not reduce cancer mortality rates to those experienced by other European nations. The UK policy debate about resolving this problem tends to be dominated by “more”: more money, more doctors, more nurses. Historically this has provided successive governments with a “get-out-of-jail” card, because circumstances meant that the NHS could always provide more. This is not the case today. The global healthcare ecosystem has changed faster than UK cancer strategies and faster than structural changes in the nation’s healthcare system. Improving cancer care in the UK will require more than inertia projects. It will require more innovation, more long-term planning, more courage from policy makers, and more attention to actual patients’ needs rather than providing what is politically available. The UK healthcare establishment should be mindful of Darwin, who suggested that, “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.”

  • 16% of cancers in the UK are linked to excess weight and type-2 diabetes (T2DM)
  • 62% of adults are overweight or obese in England
  • 4m people are living with T2DM in the UK and another 12m are at increased risk of T2DM
  • Prevalence rates of both obesity and T2DM are rising
  • Ineffective prevention initiatives should be replaced with effective ones if we are to dent the vast and escalating burden of obesity, T2DM and related cancers
  • Public health officials, clinicians and charities need to abandon ineffective inertia projects, embrace innovation and look to international best practice

 
Excess weight and type-2 diabetes linked to 16% of cancers in the UK
 
 
Being overweight and living with type-2 diabetes (T2DM) is a potentially deadly combination because it significantly increases your risk of cancer and contributes to the projected increase in cancer cases and deaths in the UK. Findings of a study published in the February 2018 edition of The Lancet Diabetes and Endocrinology suggest that a substantial number of UK cancer cases are linked to a combination of excess body mass index (BMI) and T2DM, which here we refer to as diabesity. To lower the growing burden of cancer associated with diabesity, more effective prevention strategies will be required. To achieve this, clinicians, public health officials and charities will need to reappraise their current projects, innovate, and learn from international best practice. 
 

BMI, obesity and T2DM defined
 
Body mass index (BMI) is a simple index of weight-for-height that is commonly used to classify overweight and obesity in adults. It is a person's weight in kilograms divided by the square of their height in meters (kg/m2). Overweight is a BMI greater than or equal to 25, and obesity is a BMI greater than or equal to 30. T2DM is a long-term metabolic disorder characterized by high blood glucose (sugar), insulin resistance, and a relative lack of insulin. Insulin is a hormone produced in the pancreas, which is used by the body to manage glucose levels in the blood and helps the body to use glucose for energy.
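To make the thresholds above concrete, here is a minimal Python sketch of the calculation; the function name and the example weights and heights are ours, purely for illustration:

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Classify adult weight status from BMI = weight (kg) / height (m) squared."""
    bmi = weight_kg / height_m ** 2
    if bmi >= 30:
        return f"BMI {bmi:.1f}: obese"
    if bmi >= 25:
        return f"BMI {bmi:.1f}: overweight"
    return f"BMI {bmi:.1f}: not overweight"

print(bmi_category(85, 1.75))   # BMI 27.8: overweight
print(bmi_category(95, 1.70))   # BMI 32.9: obese
```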

In this Commentary
 
This Commentary describes the findings of a study reported in a 2018 edition of The Lancet Diabetes and Endocrinology, which suggests that current initiatives to prevent and reduce the burden of diabesity are ineffective. Previous Commentaries have described the Mexican Casalud and the Oklahoma City projects, which have successfully reduced obesity and type-2 diabetes (T2DM). These represent innovative international best practice, but have largely gone unnoticed by the UK’s diabetes establishment. We also describe findings of a study published in the May 2017 edition of Scientific Reports, which suggests that although Google Trends data can detect early signs of diabetes, they are underutilized by traditional diabetes surveillance models. The prevalence of diabesity in the UK is significant and growing so fast that public health officials, clinicians and charities will have to replace failing inertia projects with more effective ones if they are to dent the growing burden of cancer linked to a combination of obesity and T2DM.
 
The Lancet Diabetes and Endocrinology study
 
A comparative risk assessment study published in The Lancet Diabetes and Endocrinology was carried out by researchers from Imperial College London, Kent University and the World Health Organization. It suggests that in 2012, 5.6% of all cancers worldwide were linked to the combined effect of obesity and diabetes, which corresponded to about 0.8m new cancer cases. Liver cancer in men accounted for about 25% of these cases, and endometrial cancer, which affects the lining of the womb, accounted for about 38% of the cases in women.
 

Obesity, T2DM and cancer
 
There is a close association between obesity and T2DM. The likelihood and severity of T2DM are closely linked with BMI. If you are obese, your risk of T2DM is 7 times greater than that of someone with a healthy weight; if you are overweight, your risk is 3 times greater. Whilst it is known that the distribution of body fat is a significant determinant of increased risk of T2DM, the precise mechanism of association remains unclear. It is also uncertain why not all people who are obese develop T2DM, and why not all people with T2DM are either overweight or obese. Also, the link between obesity and some cancers is well established. More recently, researchers have linked diabetes to several cancers, including liver, pancreatic and breast cancer. The 2018 Lancet Diabetes and Endocrinology study described in this Commentary is the first to calculate the combined effect of excess BMI and T2DM on cancer worldwide.
 
Findings

According to the Lancet study’s findings, cancers diagnosed in 2012 that are linked to diabesity were almost twice as common in women (496,700 cases) as in men (295,900 cases). In women, the combination of excess BMI and T2DM accounts for the highest proportions of breast and endometrial cancer: about 30% and 38% respectively. In men, the combination accounts for the highest proportions of liver and colorectal cancers. Overall, the biggest proportion of cancers linked to diabesity is found in high-income western nations, such as the UK (38.2% of the 792,600 cancer cases diagnosed in 2012), followed by east and southeast Asia (24.1%). 16.4% of cancer cases in men and 15% in women in high-income western nations are linked to being overweight, compared to 2.7% and 3% respectively in south Asia. Researchers suggest that on current trends, the number of cancers linked to a combination of excess BMI and T2DM could increase by 30% by 2035, which would take the worldwide share of cancers attributable to these factors from 5.6% to 7.35%.
Uneven prevalence of cancers resulting from diabesity

While cancers associated with diabesity are a relatively small percentage of the total, the global figure of 5.6% masks wide national variations in the prevalence of cancers resulting from diabesity. For example, in high-income western nations such as the UK, 16% of cancers are linked to excess BMI and T2DM, which suggests a potentially significant trend. As known cancer risk factors such as smoking tobacco have declined in the UK and other wealthy nations, diabesity has grown in importance as a risk factor.


According to Jonathan Pearson-Stuttard, of Imperial College London and lead author of the 2018 Lancet study, the prevalence of cancer linked to excess BMI and diabetes is, “particularly alarming when considering the high and increasing cost of cancer and metabolic diseases. As the prevalence of these cancer risk factors increases, clinical and public health efforts should focus on identifying optimal preventive and screening measures for whole populations and individual patients”.
 
Risks of cancer and their vast and escalating costs

Clinicians, public health officials and charities are mindful of the vast and escalating impact of excess BMI and T2DM on cancer risk. According to Diabetes UK, 4.5m people are living with diabetes in the UK, 90% of them with T2DM, and another 11.9m are at increased risk of T2DM. Research published in the May 2016 edition of the British Medical Journal reports that the prevalence of T2DM in the UK more than doubled between 2000 and 2013, from 2.39% to 5.32%, while the number of incident cases increased more steadily.
 
According to a 2014 report by Public Health England entitled “Adult obesity and type-2 diabetes”, the direct annual economic cost of patient care for people living with T2DM in 2011 was £8.8bn; the indirect costs, such as lost production, were about £13bn, and prescribing for diabetes accounted for 9.3% of the total cost of prescribing in 2012-13. The Report concludes, “the rising prevalence of obesity in adults has led, and will continue to lead, to a rise in the prevalence of type 2 diabetes. This is likely to result in increased associated health complications and premature mortality . . . Modelled projections indicate that NHS and wider costs to society associated with overweight, obesity and type 2 diabetes will rise dramatically in the next few decades”.
 
Preventing excess BMI and T2DM as a way to reduce the burden of cancer

Because of the increasing prevalence of diabesity, clinicians, healthcare providers and charities have invested substantially in programs to prevent obesity and T2DM. Notwithstanding, the UK’s record of reducing the burden of these disorders is poor. According to the authors of The Lancet study, “Population-based strategies to prevent diabetes and high BMI have great potential impact … but have so far often failed.” Despite an annual NHS spend of £14bn on diabetes care, and over £20m spent annually by Diabetes UK on “managing diabetes, transforming care, prevention, understanding and support”, over the past 10 years the number of people with diabetes has increased by 60%.
 
Healthier You: a national diabetes prevention program

Healthier You, a joint venture between NHS England, Public Health England and Diabetes UK, was launched in 2016 and aims to deliver evidence-based behaviour change interventions at scale to people at high risk of T2DM, to support them in reducing their risk. In December 2017, an interim analysis of the program’s performance was published in the journal Diabetic Medicine. Findings suggest that Healthier You has achieved higher-than-anticipated numbers of referrals: 49% as opposed to the 40% projected, and that the “characteristics of attendees suggest that the programme is reaching those who are both at greater risk of developing Type 2 diabetes and who typically access healthcare less effectively.”
 
Cautionary note
 
Notwithstanding, the study’s authors conclude with a cautionary note, saying that when data become available from the 2019 National Diabetes Audit (NDA) they will be better positioned to assess the program’s performance: specifically, whether Healthier You participants changed their weight and HbA1c levels over time. (HbA1c is a blood test that indicates blood glucose levels and is the main way T2DM is diagnosed.) We are mindful that earlier National UK Diabetes Audits suggest there are significant challenges associated with incomplete and inconsistent patient data at the primary care level, and also significant variation in diabetes care across the country. It seems reasonable to assume that incomplete and inconsistent data will present analytical challenges.
 
Outcomes as key performance indicators
 
Notwithstanding, the authors of the interim appraisal of Healthier You are right to attempt to link key performance indicators (KPIs) with patient outcomes rather than provider activities, which tend to be the preferred performance indicators used by public officials, clinicians and charities engaged in preventing obesity and T2DM. At the population level, there is a dearth of data associating specific prevention programs with reductions in the prevalence of obesity and T2DM. Until actual patient outcomes become the key performance indicators, it seems reasonable to suggest that inertia rather than innovation in the prevention and care of T2DM and obesity will prevail, and year on year the burden of diabesity and associated cancers will continue to increase.
 
Casalud

Two significant and effective innovations to reduce excess BMI and T2DM, which have been largely ignored by the UK’s diabetes establishment, are the Casalud and Oklahoma City projects. Casalud is a nationwide online continuing medical education program launched in Mexico in 2008, which has demonstrated influence on the quality of healthcare, and subsequent influence on patient knowledge, disease self-management, and disease biomarkers. Casalud provides mHealth tools and technical support systems to re-engineer how primary care is delivered in Seguro Popular (Mexico’s equivalent to NHS England) primary health clinics. By focusing on prevention and using technology, Casalud has increased the number of diabetes screenings and improved clinical infrastructure. An appraisal of the program published in the October 2017 edition of Diabetes, Metabolic Syndrome and Obesity suggests that the Casalud program successfully supports changes in obesity and T2DM self-management at the primary care level throughout the country.
 
Oklahoma City’s transformation

Oklahoma City has a population of about 550,000 people. In 2007, it was dubbed America’s “fast food capital” and “fattest city”. A decade later, the city was in the middle of a transformation. While the state still has among the highest adult obesity rates in the nation, climbing from 32.2% to 33.9% between 2012 and 2015, obesity rates in Oklahoma City dropped from 31.8% to 29.5% during that time frame, according to US Centers for Disease Control and Prevention data. The city’s transformation started with its Mayor, Mick Cornett. Cornett, who has been in office since 2004, brought national attention to the city’s public health efforts, beginning at the end of 2007 with a goal for residents to collectively lose 1m pounds. The people of Oklahoma City met that goal in 2012, but have not slowed down their efforts. What began as a campaign to promote healthy eating and exercise became a citywide initiative to, "rebuild the built environment and to build the city around people instead of cars," Cornett says.
 
Underutilized data that detect people at risk of T2DM early
 
Findings of a study published in the May 2017 edition of Scientific Reports suggest an innovative way to improve early diagnosis of excess BMI and T2DM, when the diseases are easier and less costly to treat, but so far these data are underutilized. The study reports that people are increasingly searching the Internet to assess their health, and that records of these activities represent an important source of data about population health and the early detection of T2DM. The study is based on data from the 2015 Digital Health Record produced by Push Doctor, a UK-based online company, which has over 7,000 primary care clinicians available for online video consultations. According to the study, which draws on 61m Google searches and a survey of 1,013 adults, 1 in 5 people chose self-diagnosis online rather than a consultation with their primary care doctor. The study makes use of commercially available geodemographic datasets, which combine marketing records with a number of databases in order to extract T2DM candidate risk variables. It then compares temporal relationships with the search keywords used to describe early symptoms of T2DM on Google. Researchers suggest that Google Trends can detect early signs of T2DM by monitoring combinations of associated search keywords. Notwithstanding the value of these data, they are underutilized by clinicians, public health officials and charities engaged in reducing the risks of excess BMI and T2DM, which can lead to cancer.
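The study itself combined commercial geodemographic datasets with 61m Google searches, not the simple query below. Purely as a sketch of the underlying idea, monitoring combinations of symptom-related search terms over time, one could pull Google Trends data with the unofficial pytrends library; the keyword list and the spike threshold are illustrative assumptions, not those used by the researchers:

```python
# Sketch: track relative search interest in early T2DM symptoms via Google Trends.
# Requires the third-party pytrends package (pip install pytrends) and network access.
from pytrends.request import TrendReq

# Illustrative keywords only; the Scientific Reports study derived its own risk variables.
SYMPTOM_TERMS = ["excessive thirst", "frequent urination", "blurred vision", "fatigue"]

pytrends = TrendReq(hl="en-GB", tz=0)
pytrends.build_payload(kw_list=SYMPTOM_TERMS, timeframe="today 5-y", geo="GB")
interest = pytrends.interest_over_time()          # weekly relative interest, scaled 0-100

# Flag weeks in which several symptom terms spike together (arbitrary cut-off of 75).
spikes = (interest[SYMPTOM_TERMS] > 75).sum(axis=1)
print(interest.loc[spikes >= 2, SYMPTOM_TERMS])
```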
 
Takeaways

Over the past decade, NHS England has spent more than £100bn on diabetes treatment alone, and Diabetes UK has spent some £200m on education and awareness programmes, yet diabetes in the UK has increased by 60%. 90% of diabetes cases are T2DM, which is closely linked to obesity. The combination of excess BMI and T2DM causes some 16% of all cancers in the UK. The burden of these diseases destroys the lives of millions and costs billions. It is imperative that this vast and escalating burden is dented. This will not be achieved if clinicians, public health officials and charities continue with ineffective inertia projects. They will need to innovate and embrace best practice if they are to prevent and reduce the vast and escalating burden of excess BMI, T2DM and cancer.
  • A 2018 clinical study in China is the first to use CRISPR to edit cells inside the human body in an attempt to eliminate the human papilloma virus (HPV) and is hugely significant for millions of women
  • Nearly all sexually active people get an HPV virus at some point in their lives and persistent high-risk HPV infections are the main cause of cervical cancer
  • About 34,800 women in the UK and 256,000 in the US live with cervical cancer, and each year about 3,200 and 12,200 new cases of cervical cancer are diagnosed in the UK and US respectively, nearly all related to HPV
  • Cervical cancer is increasing in older women not eligible for the HPV vaccine and not availing themselves of Pap test screening programs
  • A new study suggests that cervical cancer mortality among older women could increase by 150% in the next 20 years

CRISPR positioned to eliminate human papilloma viruses that cause cervical cancer

January 2018 marked the beginning of the first CRISPR clinical study to attempt to edit cells while they are in the body, in the hope of eliminating the human papilloma virus (HPV), the main cause of cervical cancer. The study is led by Zheng Hu of the First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China. Zheng Hu will apply a gel that carries the necessary DNA coding for the CRISPR machinery to the cervixes of 60 women between the ages of 18 and 50. The study’s aim is to prevent cervical cancers by targeting and destroying the HPV genes that cause tumor growth while leaving the DNA of normal cells untouched. Current estimates suggest that every year 527,624 women are diagnosed with cervical cancer and 265,672 die from the disease. Zheng Hu’s study is expected to be completed by November 2018, with findings reported in January 2019.
 
In this Commentary

This Commentary describes the Chinese CRISPR study and the etiology and epidemiology of cervical cancer. It also describes the current cervical cancer vaccination possibilities and the challenges they face. Further, the significance of the Chinese study is demonstrated by an English study, published in December 2017 in the Lancet Public Health, which warns that although HPV vaccination programs have significantly reduced the incidence of cervical cancer among young women, the incidence of the disease is increasing significantly among older women who do not qualify for the cervical cancer vaccine and fail to avail themselves of regular Pap tests. (A Pap test is a simple, quick and essentially painless screening procedure for cancer or precancer of the uterine cervix.) The latter part of the Commentary describes the advances CRISPR technology has made over the past decade, as well as its main ethical and technical challenges.
 
Human papilloma virus (HPV)

There are over 200 different types of HPV-related viruses. Viruses are the etiological agents of approximately 15% of human cancers worldwide, and high-risk HPVs are responsible for nearly 5% of cancers worldwide. It is estimated that about 75% of the reproductive-age population has been infected with 1 or 2 types of genital HPV. About 79m Americans are currently infected with HPV, and about 14m people become newly infected each year. The American Centers for Disease Control and Prevention estimates that more than 90% of sexually active American men and 80% of sexually active American women will be infected with at least one type of HPV at some point in their lives. Most HPV infections are harmless: they last no more than 1 to 2 years, and usually the body clears the infections on its own. More than 40 HPV types can be easily spread by anal, oral and vaginal sex. About 12 HPV types are high risk, and it is estimated that these persist in only about 1% of women. However, a central component of the association between HPV and cervical carcinogenesis is the ability of HPV to persist in the lower genital tract for long periods without being cleared. These persistent high-risk types of HPV can lead to cell changes, which, if untreated, may progress to cancer. Other HPV types are responsible for genital warts, which do not lead to cancer.
 
Etiology of cervical cancer
 
“The way that the HPV causes cancer informs us about how cancer occurs in other settings. Virus particles insert foreign DNA into a person’s normal cells. This virus then turns off the “off-switch” and allows the oncogenes [genes that can transform a cell into a tumor cell] to progress unchecked and create an oncogenic virus. So, in this case the 'insult' is known: it’s an HPV virus. However, in many circumstances we’re not sure what that initial switch is that upsets the balance between a tumor suppressor and an oncogene,” says Whitfield Growdon of the Massachusetts General Hospital, Professor of Obstetrics, Gynecology and Reproductive Biology at Harvard Medical School: see video below:

 
 
HPV and cervical cancer

The association of risk with sexual behavior has been suggested since the mid-19th century, but the central causal role of HPV infection was identified just 40 years ago. HPV infection is the main etiologic agent of cervical cancer. 99% of cervical cancer cases are linked to genital infection with HPV and it is the most common viral infection of the reproductive tract. HPV types 16 and 18 are responsible for about 70% of all cervical cancer cases worldwide. Further, there is growing evidence to suggest that HPV also is a relevant factor in other anogenital cancers (anus, penis, vagina and vulva) as well as head and neck cancers. The importance of prevention and cervical cytological screening was established in the second half of the 20th century, which preceded and even advanced etiologic understanding.
 
Epidemiology of cervical cancer
 
Cervical cancer is one of the most common types of gynecological malignancies worldwide. It ranks as the 4th most frequent cancer among women in the world, and the 2nd most common cancer in women between 15 and 44. According to the World Health Organization there were some 630m cases of HPV infection in 2012, and 190m of these led to over 0.5m new diagnoses of cervical cancer. The world has a population of some 2,784m women aged 15 and older who are at risk of developing cervical cancer. Each year about 3,200 and 12,200 new cases of cervical cancer are diagnosed in the UK and US respectively, nearly all related to HPV. An estimated 34,800 women in the UK and 256,000 in the US are living with cervical cancer. Each year some 890 and 4,200 women die from cervical cancer in the UK and US respectively.
 
HPV vaccines
 
HPV vaccines, which prevent certain types of HPV infections, are now available to females up to the age of 26, and have the potential to reduce the incidence of cervical and other anogenital cancers. “Vaccinations work by using your own immune system against foreign pathogens such as viruses and bacteria. Vaccination against some high risk sub-types of cancer-causing HPV viruses is one of the most meaningful interventions we’ve had since the development of the Pap test,” says Growdon: see video below.

 
 
Gardasil and Cervarix

Gardasil, an HPV vaccine developed by Merck & Co. and licenced by the US Food and Drug Administration (FDA) in 2006, was the first HPV vaccine recommended for girls before their 15th birthday, and can also be used for boys. In 2008 Cervarix, an HPV vaccine manufactured by GlaxoSmithKline, was introduced into the UK’s national immunization program for girls between 12 and 13. Both vaccines have very high efficacy and are equally effective at immunising against HPV types 16 and 18, which are estimated to cause 70% of cervical cancer cases. Both vaccines significantly improve the outlook for cervical cancer among women living in countries where they are routinely administered to girls before they become sexually active. “Both Gardasil and Cervarix vaccines have been shown to be incredibly effective at preventing the development of high-grade dysplasia, which we know, if left unchecked, would turn into cervical cancer,” says Growdon: see video above.

Gardasil also protects against HPV types 6 and 11, which can cause genital warts in both men and women. Second-generation vaccines are under development to broaden protection against HPV. In 2014 the FDA approved Gardasil 9, an enhanced vaccine, which adds protection against an additional 5 HPV types that cause approximately 20% of cervical cancers.
Global challenge

Despite the availability of prophylactic vaccines, HPVs remain a major global health challenge due to inadequate vaccine availability and vaccination coverage. Despite their promise, vaccine uptake has been variable in developed nations, and limited in developing nations, which are most in need. The available vaccines are expensive, require a cold chain to protect their quality, and are administered in 2 to 3 doses spanning several months. Thus, for a variety of practical and societal reasons (e.g., opposition to vaccination of young girls against a sexually transmitted agent, fear of vaccination), coverage, particularly in the US, has been lower than would be optimal from a public health perspective.


Success among young women

Notwithstanding, the study referred to above and published in the Lancet Public Health suggests cervical cancer cases are expected to fall by 75% among young women for whom vaccination is now the norm. Death from cervical cancer among the generation who were 17 or younger in 2008, when the UK vaccination program was introduced, is expected to virtually disappear.
 
Challenges for older women

Notwithstanding the success of HPV vaccines for young women, there are continuing challenges for older women who, because of their age, do not qualify for HPV vaccines, and do not attend their Pap screening test when invited. “Pap tests involve scraping the cervix on the outside for cells, which then undergo microscopic examination. Today this is carried out by a computer. Further examination is carried out by a cytopathologist who determines status . . . Pap tests do not diagnose cancer, but tell you whether you are at high risk of either having pre-cancerous or cancerous cells. Actual diagnosis of cervical cancer involves a colposcopy. This is a simple procedure, which uses a specific type of microscope called a colposcope to look directly into the cervix, magnify its appearance, and helps to take biopsies of abnormal areas,” says Growdon: see videos below.
 

What is a Pap smear test?


Diagnostic tests for cervical cancer
 
Older women and Pap tests

Pap tests, which are offered by NHS England to women between 25 and 64, are the most effective way of preventing cervical cancer; yet data show that in 2016 there was a significant drop in Pap test screening as women’s age increased. If such screening covered 85% of women, it is estimated that it would reduce deaths from cervical cancer by 27% within 5 years, and the diagnosis of new cases of cervical cancer by 14% within 1 year. According to the authors of the 2017 Lancet study, “The risk of acquiring an HPV infection that will progress to cancer has increased in unvaccinated individuals born since 1960, suggesting that current screening coverage is not sufficient to maintain – much less reduce – cervical cancer incidence in the next 20 years.”
 
Cervical cancer projected to increase in older women

Over the next 2 decades, diagnoses of cervical cancer in women between 50 and 64 are projected to increase by 62%, which could increase mortality from the disease by nearly 150%. “The main reason for this is that the population is ageing and women currently 25-40 will not benefit from vaccination – and they are in the age range where the likelihood of getting an HPV infection is quite high,” said Alejandra Castanon, one of the authors of the Lancet study.
 
Chinese study extends CRISPR technology

The Chinese study mentioned above, which aims to eliminate HPV, employs an innovative extension of CRISPR, a ‘game-changing’ technology. Over the past decade CRISPR has become a significant tool for genetic manipulation in biomedical research and biotechnology.
 
CRISPR and genome editing

CRISPR is a complex system that can recognize and cut DNA sequences, providing organisms (originally bacteria) with a strong defence against attacks and making them immune from further assaults. It has been adapted for both in vitro and in vivo use in eukaryotic cells to perform highly selective gene silencing or editing. Eukaryotic cells are those that contain a nucleus surrounded by a membrane and whose DNA is bound together by proteins into chromosomes. CRISPRs are specialized stretches of DNA, and “CRISPR-Cas9” provides a powerful tool for precision editing due to its highly efficient targeting of specific DNA sequences in a genome; it has become the standard for genetic editing. The Cas9 protein is an enzyme that acts like a pair of molecular scissors capable of cutting strands of DNA. The genomes of organisms encode messages and instructions within their DNA sequences. Genome editing involves changing those sequences, thereby changing the messages. This is achieved by making a break in the DNA and tricking a cell's natural DNA repair mechanisms into making the desired changes; CRISPR-Cas9 provides a means to do this. The technology’s ease of use and low cost have made it popular among the scientific community, and the possibility of its use as a clinical treatment for several genetically derived pathologies has rapidly spread its significance worldwide.
 
Changing ethical concerns

Despite CRISPR’s promise, there have been significant ethical concerns about genome editing, which center on human germline editing. This is because germline editing entails deliberately changing the genes passed on to children and future generations; in other words, creating genetically modified people. The debate about genome editing is not a new one, but it has regained attention following the discovery that CRISPR has the potential to make such editing more accurate and even “easy” in comparison with older technologies. As of 2014, about 40 countries discouraged or banned research on germline editing, including 15 nations in Western Europe. There is also an international effort, launched in December 2015 at the International Summit on Human Gene Editing and led by the US, UK, and China, to harmonize regulation of the application of genome editing technologies.
 
After initially being opposed to using CRISPR in humans, in June 2016 a US National Institutes of Health advisory panel approved the technology for a study designed to target three types of cancer, funded by the Parker Institute for Cancer Immunotherapy at the University of Pennsylvania. In 2017 the UK approved the use of CRISPR for research in healthy human embryos.

 
Off-target effects

Soon after scientists reported in 2012 that CRISPR can edit DNA, experts raised concerns about “off-target effects,” meaning either that CRISPR changes a gene scientists did not want changed, or that it fails to change a gene that they do. Although CRISPR-Cas9 is known for its precision, a study published in 2017 in the journal Nature Methods raised concerns that, because of the potential for off-target effects, testing CRISPR in humans may be premature. Unintended consequences can happen because one molecule in the CRISPR system acts like a “molecular bloodhound”, searching the genome until it finds a match to its own sequence of genetic letters; but there are 6bn genetic letters in the human genome, which means there may be more than one match. Scientists anticipate and plan for this by using a computer algorithm to predict where such flaws might occur, and then search those areas to see whether off-target effects did occur. Notwithstanding such procedures, and despite CRISPR’s precision, substantial efforts are still required to make the technology safe for routine use in human clinical treatments.
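As a toy illustration of the kind of search such prediction algorithms perform, the Python sketch below scans a DNA string for sites that nearly match a 20-letter guide sequence and are followed by the Cas9 “NGG” PAM motif; the guide and the short demo sequence are invented, and real tools score mismatches far more carefully across an entire genome:

```python
# Toy illustration of off-target searching: find 20-letter sites that resemble a
# guide sequence and are followed by an "NGG" PAM, tolerating a few mismatches.
GUIDE = "GACCTGAAGTTCATCTGCAC"  # hypothetical 20-nt guide sequence

def mismatches(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def candidate_sites(genome: str, guide: str = GUIDE, max_mismatch: int = 3):
    """Yield (position, site, mismatch count) for near-matches followed by an NGG PAM."""
    k = len(guide)
    for i in range(len(genome) - k - 2):
        site, pam = genome[i:i + k], genome[i + k:i + k + 3]
        if pam[1:] == "GG" and mismatches(site, guide) <= max_mismatch:
            yield i, site, mismatches(site, guide)

# Tiny made-up sequence for demonstration; a real genome has about 6bn letters.
demo = "TTGACCTGAAGTTCATCTGCACAGGATAGACCAGAAGTTCATCTGCACTGGCA"
for pos, site, mm in candidate_sites(demo):
    print(pos, site, f"{mm} mismatch(es)")
```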
 
Advances using CRISPR
 
The first clinical study using CRISPR began in October 2016 at the West China Hospital in Chengdu. Researchers, led by oncologist Lu You from Sichuan University, removed immune cells from the blood of a person with lung cancer, used CRISPR to disable a gene called PD-1, and then returned the cells to the body. This study is part of a much larger CRISPR genome editing revolution. Today, there are about 20 human clinical studies taking place using CRISPR technology, most of which are in China. Different studies focus on different cancers, including breast, bladder, oesophageal, kidney, and prostate cancers. Further, a 2017 paper published in the journal Cell describes a number of innovative ways in which CRISPR is being used, including editing cells while they are inside the body.
 
Takeaways
 
Despite the efficacy of HPV vaccines, immunization against cervical cancer still faces significant challenges. Vaccines only target young people before they become sexually active, and are not recommended for slightly older, sexually active women. There is therefore an urgent and growing concern about older women, who were not eligible for HPV vaccination, are not availing themselves of regular Pap tests, and in whom the incidence of cervical cancer is increasing significantly. This makes Zheng Hu’s clinical study extremely important, because it holds out the potential to substantially dent this large and rapidly increasing burden of cervical cancer.
  • For the first time in medical history scientists have corrected the cause of Huntington’s disease (HD)
  • HD is a fatal inherited neurodegenerative disorder that causes uncontrolled movements, emotional challenges, and loss of cognition
  • Current treatments only help symptoms rather than slow the progression of the disease
  • Researchers from University College London (UCL) have safely lowered the levels of toxic proteins in the brain that cause HD
  • Experts say this is the biggest breakthrough in neurodegenerative research for 50 years
  • Earlier, an American animal study successfully used a similar technique to “silence” the mutant huntingtin gene in mice brains
  • Gene silencing stops the gene from making any mutant protein but does not eradicate the mutant HD gene
  • More studies are necessary to show whether the UCL study will effectively change the course of HD
  • Gene editing is a game-changer in biomedical research, but it faces significant technical and ethical challenges

Huntington’s disease and gene silencing
 
In December 2017, scientists completed the first human genetic engineering study that targeted the cause of Huntington’s disease (HD), also known as Huntington’s chorea, and successfully lowered the level of the harmful huntingtin protein that irreversibly damages the brains of patients suffering from this incurable degenerative condition. Current treatments for HD only help with symptoms, rather than slow the disease’s progression. The study’s leader, Professor Sarah Tabrizi, director of the Huntington’s Disease Centre at the University College London (UCL) Institute of Neurology, says, “The results of this trial are of ground-breaking importance for Huntington’s disease patients and families”. Tabrizi’s research followed an earlier American study, which successfully used a similar technique to “silence” the mutant huntingtin gene in mice brains.
 
This Commentary describes Huntington’s disease, the 2 studies to silence the huntingtin gene, and the gene silencing technology that underlies both studies.
 

Huntington's disease
 
Huntington’s disease (HD) is a fatal inherited neurodegenerative disorder caused by a mutation in the gene encoding a protein called huntingtin, which triggers the degeneration of cells in the motor control regions of the brain, as well as other areas. HD is one of the most devastating neurodegenerative diseases, which some patients describe as Parkinson’s, Alzheimer’s and Motor Neurone disease rolled into one. HD leads to loss of muscle co-ordination, behavioural abnormalities and cognitive decline. Generally, if one parent has HD then each child has a 50% chance of inheriting the disease. HD affects both sexes and about 12 people in 100,000, but appears to be less common in people of Japanese, Chinese, and African descent. If a child does not inherit the mutant huntingtin gene, s/he will not develop the disease and generally cannot pass it to subsequent generations. Although there is a wide variation in its onset age, the majority of HD patients are diagnosed in middle age. Currently there is no cure for the disorder: although drugs exist that help manage some symptoms, they do not influence the progression of the disease.
 
 Signs and symptoms
 
The characteristic symptoms of HD include cognitive impairment, mood shifts, irritability, depression and behavioural changes. As the disease develops, symptoms get progressively worse and include uncontrolled movements, cognitive difficulties and issues with speech and swallowing. HD typically begins between the ages of 30 and 50. An earlier-onset form called juvenile HD occurs in people under 20. Symptoms of juvenile HD differ somewhat from adult-onset HD and include unsteadiness, rigidity, difficulties at school, and seizures.
 
Diagnoses
 
A genetic test, together with a medical history and neurological and laboratory tests, supports doctors in their diagnosis of HD. Genetic testing, which costs between US$250 and US$350, is both cost-effective and diagnostically precise, and is important to establish whether HD is present in a family because some other illnesses may be misdiagnosed as HD. The disorder is a model for genetic testing because HD is relatively common, its etiology is understood, and there is significant experience with its management. There are 3 main types of HD genetic testing: (i) testing to confirm or rule out the disorder, (ii) pre-symptomatic testing, and (iii) prenatal testing. Persons at risk of HD often seek pre-symptomatic testing to assist in making decisions about marriage, having children, and career. Positive results can evoke significant adverse emotional reactions, so appropriate pre- and post-test counselling is important.
  
Treatment
 
Current treatments can only alleviate the symptoms of HD, and do not delay the onset or slow the progression of the disease. Until the findings of the Tabrizi study there was no treatment that could stop or reverse the course of the disorder. Tetrabenazine and deutetrabenazine are drugs prescribed for treating the chorea associated with HD. Antipsychotic drugs may also help to alleviate chorea and can be used to help control hallucinations, delusions, and violent outbursts associated with the disease. Drugs may be prescribed to treat depression and anxiety, which are relatively common among HD sufferers. Drugs used to treat HD may have side effects such as fatigue, sedation, decreased concentration, restlessness, or hyper-excitability, and should only be used when symptoms create problems for the individual.
 
The Emory Study

In June 2017 scientists from the Emory University School of Medicine in Atlanta, USA, published findings of an animal study in the Journal of Clinical Investigation, which used the gene editing technique CRISPR-Cas9 to “silence” the mutant huntingtin gene (mHTT) in the brains of mice.

Study leader Xiao-Jiang Li, a professor and expert in the molecular mechanisms of inherited neurodegeneration, used adult mice engineered to carry the same mutant huntingtin gene that causes Huntington's in humans, and which were already showing signs of the disease. Using CRISPR-Cas9, Li introduced genetic changes in an afflicted region of the brain that prevented further production of the faulty huntingtin protein. After 3 weeks, researchers noted that in the brain region where the vector had been applied the aggregated proteins had almost disappeared, and there was a concomitant improvement in the mice’s physical functions, although not to the level of the control mice.

The Emory research team’s findings showed that CRISPR-Cas9 successfully silenced part of a gene that produces toxic protein aggregates in the brains of mice, and demonstrated that the technique holds out the possibility of a one-time solution for HD.
 
The UCL study
 
What the Emory study achieved in mice the UCL study achieved in humans. The UCL study of the huntingtin-lowering drug Ionis-HTTRx, led by Tabrizi and sponsored by Ionis Pharmaceuticals, a US$6bn NASDAQ-traded company based in Carlsbad, California, used a related approach to that of the Emory study to “silence” the mutated huntingtin gene. The study, which had been in pre-clinical development for over a decade, enrolled 46 patients with early HD in 9 study centers in the UK, Germany and Canada. Each patient received 4 doses of either Ionis-HTTRx or a placebo, given one month apart by injection into the spinal fluid to enable the drug to reach the brain. As the study progressed, the dose of Ionis-HTTRx was increased several times according to the ascending-dose study design.
Orphan drug

Ionis-HTTRx is a so-called antisense drug, which means that it inhibits the expression of the huntingtin gene and therefore reduces the production of the mutant huntingtin protein (mHTT) in patients with HD. In January 2016 Ionis-HTTRx received orphan drug designation from the US Food and Drug Administration (FDA) and the European Medicines Agency. This is a special status given to drugs for rare diseases, which the pharmaceutical industry might otherwise not develop for economic reasons but which respond to a public health need.
 
 
UCL study extended

Ionis-HTTRx was found to reduce the amount of the mutant huntingtin protein that causes HD in the patients tested. It was also found to have an acceptable safety and tolerability profile. It is too early to call Tabrizi and her colleagues’ findings a “cure” for HD, as the study was too small and too short to demonstrate whether patients’ clinical symptoms improve over time. Long-term data are necessary to show whether lowering mHTT will effectively change the course of the disease. Notwithstanding, the study’s findings point to the prospect of effective future treatments.
 
As a result of the study’s success, Ionis’s partner Roche, a Swiss multinational healthcare company, has exercised its option and paid US$45m to license Ionis-HTTRx and assume responsibility for its further development, regulatory activities and commercialization. A future open-label extension study is expected to assess the effect of Ionis-HTTRx on the progression of HD, and Ionis Pharmaceuticals announced that all patients in the completed study would be offered a place in the extension study.
 
Gene silencing

Gene silencing, the technique used in both the UCL and Emory studies, relies on the fact that cells do not directly copy DNA into protein, but instead make a working copy in a chemical called RNA, which acts as a “messenger” carrying the instructions from DNA that control protein production. Gene silencing techniques target the RNA message, cutting it up and thereby stopping the cell from making the mutant protein. However, even if gene silencing works to reduce the level of the harmful huntingtin protein, as it did in both the UCL and Emory studies, it does not change the DNA, and an HD mutation carrier still has the mutant HD gene. The “silencing” simply stops the gene making any mutant protein. Rather than silencing the mutant huntingtin gene, it would be more efficacious if scientists could remove or correct the mutation that causes the disease.
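The specificity of an antisense drug such as Ionis-HTTRx comes from Watson-Crick base pairing: the drug is a short sequence designed to be complementary to the RNA message it is meant to neutralize. The minimal Python sketch below illustrates only that pairing logic; the mRNA fragment shown is hypothetical and is not the actual huntingtin transcript or the drug’s sequence.

```python
# Minimal sketch of the base-pairing principle behind antisense silencing.
# The mRNA fragment below is hypothetical, not the real huntingtin transcript.

# Watson-Crick pairing rules for RNA: A pairs with U, G pairs with C.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense sequence that would base-pair with the given mRNA.

    Each base is complemented and the result is reversed, because the two
    strands bind in antiparallel orientation.
    """
    return "".join(PAIR[base] for base in reversed(mrna.upper()))

if __name__ == "__main__":
    fragment = "CAGCAGCAGCAGGAA"      # hypothetical mRNA fragment (5'->3')
    oligo = antisense(fragment)       # sequence an antisense oligo would need
    print(f"mRNA (5'->3'):      {fragment}")
    print(f"antisense (5'->3'): {oligo}")
```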
 
CRISPR

CRISPR allows scientists to easily and inexpensively find and alter virtually any piece of DNA in any species. The technology potentially offers a cure for a number of incurable diseases, but its use in humans is not only ethically controversial, but also challenged by a need to find efficacious ways to deliver gene editing techniques inside the human body. Notwithstanding, there is a global race to push the technique to its limits.
 
Despite the potential of gene editing technology, scientists have encountered significant delivery challenges in using CRISPR techniques in humans for HD. Because CRISPR therapies are based on big protein molecules, they cannot be taken as a pill, but have to be delivered into the brain using injections, packaged into viruses, or similar technology. This presents delivery challenges, and the efficacy of gene editing therapies for neurodegenerative disorders is predicated upon effective delivery.

 
Takeaways

The UCL study significantly reduced the relevant protein levels in the cerebrospinal fluid of patients with Huntington’s. The success of gene silencing and gene editing in HD raises the possibility that these techniques might work for other neurodegenerative disorders such as Alzheimer’s. However, the genetic causes of Alzheimer’s and other neurodegenerative disorders are less well understood and more complex than Huntington’s, which makes them potentially more challenging targets. Further, there are still significant scientific and ethical challenges to be overcome before gene-editing technology becomes common practice.

Oesophageal Cancer


Epidemiology

Carcinoma of the oesophagus is a common, aggressive tumour. Several histological types are seen, almost all of which are epithelial in origin. The vast majority of these tumours will be either squamous cell carcinoma (SCC) or adenocarcinoma (AC).

Over the past two decades the incidence of SCC, which is particularly associated with smoking and alcohol, has remained relatively stable or declined, whilst there has been a rapid rise in the incidence of AC, particularly in Caucasian males. AC has now overtaken SCC as the most common form of oesophageal tumour in some developed countries.

The majority of cases (80-85%) are diagnosed in less developed countries; most of these are SCC.

Incidence

Carcinoma of the oesophagus is the 8th most common cancer in the world, with an annual incidence of 18.0 per 100,000 in men and 8.5 per 100,000 in women. The male:female ratio for the adenocarcinoma subgroup is 52:10.

Around 42% of cases are diagnosed in people aged 75 years and over, with more than eight out of ten (83%) occurring in those aged 60 and over.

The incidence of oesophageal carcinoma varies considerably with geographical location, with high rates in China and Iran, where it has been directly linked to the preservation of food using nitrosamines. AC is seen more frequently in Caucasian populations, whereas SCC is more frequent in people of African descent.

 


Risk factors

Tobacco and alcohol use are strong risk factors for both SCC and AC, with a synergistic effect for SCC and an additive effect for AC. Cigarette smoking is associated with a 10-fold increase in risk for SCC and a 2- to 3-fold increase in risk for AC.

The relative increase in risk caused by smoking remains high for AC, even after 30 years of giving up smoking, but reduces within 10 years for SCC.

Barrett’s oesophagus is a precursor of AC and therefore a risk factor.

Chronic inflammation and stasis from any cause increase the risk of oesophageal SCC – eg, strictures due to caustic injury or achalasia.

Tylosis and Paterson-Brown-Kelly syndrome are also associated with an increased risk for SCC. Obesity has been linked with increased risk for AC but reduced risk for SCC. Obesity increases the risk of gastro-oesophageal reflux disease (GORD), in turn increasing the risk of Barrett’s oesophagus.

The relationship between obesity and the rise in AC has, however, been questioned. A review of the Connecticut Tumor Registry data between 1940-2007 showed that the increase in AC seen in the 1960s predated the rise in obesity by a decade. The authors of the review propounded that this may have been linked to a decrease in the incidence of Helicobacter pylori infection or environmental factors.

One Japanese study showed a link between oesophageal cancer and tooth loss.

A family history of hiatal hernia is a risk factor for oesophageal adenocarcinoma, and some people appear to have a genetic predisposition to developing types of gastro-oesophageal cancers.

  • A new gene therapy study is poised on the cusp of medical history because it holds out the prospect of providing a cure for hemophilia
  • Hemophilia is a rare incurable life-threatening blood disorder
  • People with hemophilia have little or no protein needed for normal blood clotting
  • Severe forms of the disorder may result in spontaneous and excessive bleeding
  • In recent history many people with hemophilia died before they reached adulthood because of the dearth of effective treatments
  • A breakthrough therapy in the 1980s was contaminated with deadly viruses
 
A cure for hemophilia?

A study led by researchers from Barts Health NHS Trust and Queen Mary University of London, published in a 2017 edition of the New England Journal of Medicine, has taken a significant step towards finding a cure for hemophilia A, a rare, incurable, life-threatening blood disorder caused by the failure to produce certain proteins required for blood clotting. In recent history only a few people with hemophilia survived into adulthood: because of the dearth of effective treatments, any small cut or internal haemorrhage after even a minor bruise was often fatal.
 
The royal disease

There are 2 main types of hemophilia: A and B. Both are rare congenital bleeding disorders sometimes referred to as “the royal disease”, because in the 19th and 20th centuries hemophilia affected European royal families. Queen Victoria of England is believed to have been a carrier of hemophilia B, a rarer condition than hemophilia A. 2 of Victoria’s 5 daughters (Alice and Beatrice) were also carriers. Through marriage they passed on the mutation to various royal houses across Europe, including those of Germany, Russia and Spain. Victoria’s son Prince Leopold was diagnosed with hemophilia as a child. He died at 31 and throughout his life had a constant staff of doctors around him.
 
Epidemiology

The worldwide incidence of hemophilia A is about 1 in 5,000 males, with approximately 33% of affected individuals having no family history of the disorder; in these cases it results from a new mutation or an acquired immunologic process. Only 25% of people with hemophilia receive adequate treatment; most of these are in developed nations. In 2016 there were some 7,700 people diagnosed with the condition in the UK, 2,000 of whom had a severe form with virtually no blood-clotting protein. In the US there are some 20,000 people living with the disorder. Morbidity and death from hemophilia are primarily the result of haemorrhage, although HIV and hepatitis infections became prominent in patients who received therapies with contaminated blood products prior to the mid-1980s: see below.
 
Etiology
Hemophilia A and B are similar disorders. Both are caused by an inherited or acquired genetic mutation, which reduces or eliminates the activity of the coagulation factors referred to as factor VIII for hemophilia A and factor IX for hemophilia B. Factors VIII and IX are essential blood-clotting proteins, which work with platelets to stop or control bleeding. The amount of the protein present in the blood and its activity determine the severity of symptoms, which range from mild to severe. Factor VIII and IX deficiencies are the best-known and most common types of hemophilia, but other clotting factor deficiencies also exist. The genes encoding factors VIII and IX are located on the X chromosome. Females have 2 X chromosomes, while males have 1 X and 1 Y chromosome, and only the X chromosome carries the genes related to these clotting factors. A male who has a hemophilia mutation on his X chromosome will have hemophilia. Since females have 2 X chromosomes, a mutation must be present in both copies of the gene to cause hemophilia. When a female has a hemophilia mutation on only 1 of her X chromosomes, she is a "carrier” of the disorder and can pass the gene to her children. Sometimes carriers have low levels of a clotting factor and therefore have symptoms of hemophilia, including bleeding.
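One way to see the X-linked pattern described above is to enumerate the possible children of a carrier mother and an unaffected father. The sketch below does this under a simplified Mendelian model (each parent passes on either sex chromosome with equal probability); the labels are illustrative only and the model ignores complications such as new mutations and symptomatic carriers.

```python
# Illustrative sketch of X-linked recessive inheritance for hemophilia.
# Simplified Mendelian model: each parent passes one sex chromosome at random.
from itertools import product
from collections import Counter

MOTHER = ["Xh", "X"]   # carrier mother: one X carries the hemophilia mutation
FATHER = ["X", "Y"]    # unaffected father

def classify(child):
    chromosomes = sorted(child)
    if "Y" in chromosomes:                 # son: X from mother, Y from father
        return "affected son" if "Xh" in chromosomes else "unaffected son"
    if chromosomes.count("Xh") == 2:       # both X chromosomes carry the mutation
        return "affected daughter"
    if "Xh" in chromosomes:                # one mutated X: carrier
        return "carrier daughter"
    return "unaffected daughter"

outcomes = Counter(classify(child) for child in product(MOTHER, FATHER))
for outcome, count in outcomes.items():
    print(f"{outcome}: {count}/4")
# Half of sons are affected and half of daughters are carriers,
# as described in the paragraph above.
```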

 

Hemophilia A and B

Hemophilia A and B affect all races and ethnic groups equally. Hemophilia B, caused by factor IX deficiency, is the second most common type of hemophilia and is less common than hemophilia A. Notwithstanding, the result is the same for people with hemophilia A and B: they both bleed more easily and for a longer time than usual. The difference between hemophilia A and B lies in which factor is missing or at a low level, and the replacement treatments differ accordingly: hemophilia A needs to be treated with factor VIII, and hemophilia B with factor IX. Giving factor VIII to someone with hemophilia B will not help to stop the bleeding.
 
 
Mild to severe hemophilia

People with mild hemophilia have few symptoms on a day-to-day basis, but may bleed excessively, for example during surgery, whilst those with a severe form of the disorder may have spontaneous bleeds. Severe hemophilia tends to be diagnosed in childhood or as part of screening in families known to have bleeding disorders. People who do not have hemophilia have a factor VIII activity of 100%, whereas people who have severe hemophilia A have a factor VIII activity of less than 1%. In severe forms, even the slightest injury can result in excessive bleeding as well as spontaneous internal bleeding, which can be life threatening. Also, the pressure of massive bleeding into joints and muscles makes hemophilia one of the most painful diseases known to medicine. Without adequate treatment, many people with hemophilia die before they reach adulthood. With effective replacement therapy, however, life expectancy is about 10 years less than that of males without hemophilia, and children treated today can look forward to a near-normal life expectancy. Replacement therapy entails concentrates of clotting factor VIII (for haemophilia A) or clotting factor IX (for haemophilia B) being slowly dripped or injected into a vein to replace the clotting factor that is missing or low.
 
Brief history of treatments

In the 1950s and 60s fresh frozen plasma (FFP) was the principal therapy for hemophilia A and B. However, because each bag of FFP contained only very small amounts of the clotting agents, large amounts of plasma had to be transfused to stop bleeding episodes and people with the conditions had to be hospitalized. In some countries FFP is still the only product available for treating hemophilia.
 
In the mid-1960s Judith Pool, an American scientist, made a significant advance in haemophilia therapy when she discovered that the sludge that sank to the bottom of thawing plasma was rich in factor VIII (but not IX) and could be frozen and stored as “cryoprecipitate”. This more concentrated clotting factor VIII became the preferred treatment for severe hemophilia A as it required smaller volumes and patients could receive treatment as outpatients. Notwithstanding, cryoprecipitate is less safe from viral contamination than concentrates, and is harder to store and administer.

 
The tainted blood scandal

In the early 1970s drug companies found they could take the clotting factors VIII and IX out of blood plasma and freeze-dry them into a powder. This quickly became the treatment of choice as it could be used to treat hemophilia at home. There was a huge demand for the new freeze-dried product, and drug companies pooled the plasma of large groups of donors, sometimes as many as 25,000, to meet it. Companies seeking substantial supplies of blood paid prisoners and others to donate. Some donors were addicted to drugs and infected with HIV and hepatitis C. By the early 1980s, human blood, plasma and plasma-derived products used in therapies for hemophilia were discovered to be transmitting potentially deadly blood-borne viruses, including hepatitis viruses and HIV. Thus the same advance that was being used to treat people with hemophilia was also causing sufferers prolonged illness and premature death.
 
Infected hemophilia treatments in the UK

A report published in 2015 by a UK All Party Parliamentary Group on Haemophilia found that 7,500 people in Britain with the disorder were infected with the contaminated blood products. According to Tainted Blood, a group set up in 2006 to campaign on behalf of people with hemophilia, 4,800 people were infected with hepatitis C, a virus that causes liver damage and can be fatal. Of these, 1,200 were also infected with HIV, which can cause AIDS, and some 2,400 sufferers died prematurely.
 
A 2017 UK official enquiry
 
In 1991 the UK government made ex-gratia payments to hemophilia patients infected with HIV, averaging £60,000 each, on condition that they dropped further legal claims. The extent of infection with hepatitis C was not discovered until years later. Campaigners unearthed evidence suggesting that UK officials in the Department of Health knew or suspected that the imported factor concentrates were risky as early as 1983. Notwithstanding, the NHS is said to have continued to administer the contaminated concentrates to patients with hemophilia. In 2017 the UK government set up an inquiry into the NHS contaminated blood scandal.
 
A new scientific era

In the early 1980s, soon after HIV was identified, another significant breakthrough occurred in the treatment of hemophilia when manufacturers began producing clotting factors from genetically engineered cells carrying a human factor gene (so-called recombinant products). Today, all commercially prepared factor concentrates are treated to remove or inactivate blood-borne viruses. Also, scientists have a better understanding of the etiology of the disease, are able to detect and measure the inhibitors that can develop against treatment, and know how to eliminate them by manipulating the immune system.
 
A cure for haemophilia A

Researchers led by John Pasi, Director of the Haemophilia Centre at Barts Health NHS Trust and Professor of Haemostasis and Thrombosis at Queen Mary University of London, have successfully carried out the first gene therapy study of its kind for hemophilia A. The study enrolled 13 patients across England and injected them with a functional copy of their missing gene, which allows their cells to produce the essential blood-clotting protein factor VIII. Researchers followed participants for up to 19 months, and findings showed that 85% had normal or near-normal levels of the previously missing factor VIII clotting agent and all participants were able to stop their previously regular haemophilia A treatment: they were effectively cured.
 
Gene therapy
Gene therapy is particularly relevant for diseases such as hemophilia A where, until the recent UK study reported in this Commentary, there was no cure. It allows doctors to prevent and treat a disorder by inserting a healthy copy of a gene into a patient’s cells to compensate for a mutated or missing gene that causes the disease. The technique has risks and is still being evaluated to ensure that it is safe and effective. A related but more contentious approach is gene editing, which alters DNA directly: in 2015, a group of Chinese scientists edited the genomes of human embryos in an attempt to modify the gene responsible for β-thalassemia, another potentially fatal blood disorder.

 
Expanding the study

According to Pasi, "We have seen mind-blowing results, which have far exceeded our expectations. When we started out we thought it would be a huge achievement to show a 5% improvement, so to actually be seeing normal or near normal factor levels with dramatic reduction in bleeding is quite simply amazing. We really now have the potential to transform care for people with haemophilia using a single treatment for people who at the moment must inject themselves as often as every other day." Pasi and his colleagues are expected to undertake further studies with participants from the USA, Europe, Africa and South America.
 
Takeaway

Hemophilia is a life-changing, often painful and debilitating disorder. In recent history there was a dearth of effective therapies and many people with the disorder did not survive into adulthood. Later advances based on concentrated blood products improved treatment, but contamination of those products with deadly viruses destroyed the lives of many sufferers and in many cases led to their premature death. The study undertaken by Pasi and his colleagues is on the cusp of medical history because it has the potential to provide a cure for what has been an incurable, life-changing disease. Notwithstanding, it is worth bearing in mind that scientific discovery is rarely quick and rarely proceeds in a straight line.
  • In high-income countries populations are aging
  • By 2050 the world population of people over 60 is projected to reach 2bn
  • Age-related low back pain is the highest contributor to disability in the world
  • Over 80% of people will experience back pain at some point in their life
  • Older people with back pain have a higher chance of dying prematurely
  • The causes of back pain are difficult to determine, which presents challenges for the diagnosis and management of the condition
  • The US$100bn-a-year American back pain industry has been described as “often ineffective, and sometimes harmful”
  • Each year 10,000 and 300,000 spine fusion surgeries are carried out in the UK and US respectively
  • 20% of spinal fusion surgeries are undertaken without good evidence
  • In 10 to 39% of spine surgery patients pain continues or worsens after surgery
 
Age of the aged and low back pain
 
A triumph of 20th century medicine is that it has created the “age of the aged”. By 2050 the world population of people aged 60 and older is projected to be 2bn, up from 900m in 2015. Today there are 125m people aged 80 and older, and by 2050 there are expected to be 434m people in this age group worldwide. The average age of the UK population has reached 40. Some 22% of the UK population will be over 65 by 2031, which will exceed the percentage under 25, and 33% of people born today in the UK can expect to live to 100. However, this medical success is the source of rapidly increasing age-related disorders, which present significant challenges for the UK and other high-income nations. Low back pain (LBP) is the most common age-related pain disorder, and is ranked as the highest contributor to disability in the world.
 
At some point back pain affects 84% of all adults in developed economies. Research published in 2017 in the journal Scoliosis and Spinal Disorders suggests that LBP is the most common health problem among older adults that results in pain and disability. The over-65s are the second most common age group to seek medical advice for LBP, which represents a significant and increasing workload for health providers. Each year back pain costs the UK and US economies respectively some £5bn and more than US$635bn in medical treatment and lost productivity. LBP accounts for 11% of the total disability of the respective populations. This Commentary discusses therapies for LBP, and describes the changing management landscape for this vast and rapidly growing condition.

 

Your spine and LBP

 

Your spine, which supports your back, consists of 24 vertebrae, bones stacked on top of one another. At the bottom of your spine, below your vertebrae, are the bones of your sacrum and coccyx. Threading through the entire length of your vertebrae is your spinal cord, which transmits signals from your brain to the rest of your body. Your spinal cord ends in your lower back and continues as a series of nerves that resemble a horse’s tail, hence its medical name, the ‘cauda equina’. Between the vertebrae are discs. In younger people discs contain a high degree of water, which gives them the ability to act like shock absorbers. During the normal aging process discs lose much of their water content and degenerate. Such degeneration may result in a herniated disc, when the disc nucleus extrudes through the disc’s outer fibres, or in compression of nerve roots, which may lead to radiculopathy. This condition is more commonly known as sciatica: pain caused by compression of a spinal nerve root in the lower back, often associated with the degeneration of an intervertebral disc, which can manifest itself as pain, numbness, or weakness of the buttock and outer side of the leg.

 

Challenges in diagnosis
 
Because your back comprises so many connected tissues, including bones, muscles, ligaments, nerves, tendons and joints, it is often difficult for doctors to say with confidence what causes back pain, even with the help of X-rays and MRI scans. Usually, LBP does not have a serious cause. In the majority of cases LBP will reduce and often disappear within 4 to 6 weeks, and can therefore be self-managed by keeping mobile and taking over-the-counter painkillers. However, in a relatively small proportion of people with LBP, the pain and disability can persist for many months or even years. Once LBP has been present for more than a year, few people return to normal activities. There is not sufficient evidence to suggest definitive management pathways for this group, which accounts for the majority of the health and social costs associated with LBP.
 
Assessing treatment options for back pain

Ranjeev Bhangoo, a consultant neurosurgeon at King’s College Hospital Trust, London, and the London Neurosurgery Partnership, describes the nature and role of intervertebral discs and how treatment options should be assessed.

“When a person presents with a problem in the lower back, which might manifest as leg or arm pain, you need to ask 3 questions: (i) is the history of the pain compatible with a particular disc causing the problem? (ii) Does an examination suggest that a particular disc is causing a problem? And (iii) does a scan show that the disc you thought was the problem is the problem? If all 3 answers align, then there may be some good reason to consider treatment options. If the 3 answers are not aligned, be wary of a surgeon suggesting intervention, because 90% of us will experience back pain at some point in our lives, and 90% of the population don’t need back surgery.”
 
 
Back pain requiring immediate medical attention
 
Although the majority of LBP tends to be benign and temporary, people should seek immediate medical advice if their back pain is associated with certain red flags, such as loss of bladder control, loss of weight, fever, or upper back or chest pain; if there is no obvious cause for the pain; or if the pain is accompanied by weakness, loss of sensation or persistent pins and needles in the lower limbs. Also, people with chronic lifetime conditions such as cancer should pay particular attention to back pain.
 
Epidemiology of LBP

Back pain affects approximately 700m people worldwide. A 2011 report by the US Institute of Medicine estimates that 100m Americans are living with chronic pain, which is more than the total affected by heart disease, cancer, and diabetes combined. This represents a vast market for therapies, which include surgery and the prescription of opioids. Estimates of the prevalence of LBP vary significantly between studies. There is no convincing evidence that age affects the prevalence of back pain, and published data do not distinguish between LBP that persists for more than, or less than, a year. Each year LBP affects some 33% of UK adults, and around 20% of these - about 2.8m - will consult their GP. One year after a first episode of back pain, 62% of people still experience pain, and 16% of those initially unable to work are not working after 1 year. Typically, in about 60% of cases pain and disability improve rapidly during the first month after onset.

 

Non-invasive therapies for LBP

The most common non-invasive treatment for LBP is non-steroidal anti-inflammatory drugs (NSAIDs). Other pain medications include paracetamol, oral steroids, gabapentin/pregabalin, opioids, muscle relaxants and antidepressants, while non-drug options include chiropractic manipulation, osteopathy, epidural injections, transcutaneous electrical nerve stimulation (TENS), ultrasound (which uses vibration to deliver heat and energy to parts of the lower back), physiotherapy, massage, and acupuncture.
 

 
Prelude to surgery
 
Despite the range of non-invasive therapies for LBP, the incidence of lumbar spinal fusion surgery for ordinary LBP increased significantly over the past 2 decades without definitive evidence of the efficacy of the procedure. Recent guidelines from UK and US regulatory bodies have instructed doctors to consider more conservative therapies for the management of back pain, and this has resulted in the reduction in the incidence of spinal fusion surgeries.
 
Notwithstanding the clear recognition of the paucity of evidence for reliable rates of improvement following fusion for back pain, it does not follow that fusions should never be done; indeed, there are many instances where fusion is strongly supported by evidence. The gold standard for diagnosing degenerative disc disease is MRI evidence, which has formed the principal basis for surgical decisions in older adults. However, studies suggest that although MRI evidence indicates that degenerative change in the lumbar spine is common among people over 60, the overwhelming majority do not have chronic LBP.
 
Increasing prevalence of spinal fusion surgery
 
Each year, NHS England undertakes some 10,000 spinal surgeries for LBP at a cost of some £200m. This is in addition to the large and growing number of patients receiving epidurals, which cost the NHS about £9bn a year and which also have limited evidence of efficacy. In the US more than 300,000 back surgeries are performed each year. In 10 to 39% of these cases, pain may continue or even worsen after surgery, a condition known as ‘failed back surgery syndrome’. In the US, about 80,000 new cases of failed back surgery syndrome accrue each year. Pain after back surgery is difficult to treat, and many patients are obliged to live with pain for the rest of their lives, which causes significant disability.
  
Back pain and premature death
 
A study by researchers from the University of Sydney, published in 2017 in the European Journal of Pain, found that older people with persistent chronic back pain have a higher chance of dying prematurely. The study examined the prevalence of back pain in nearly 4,400 Danish twins over 70. The researchers then compared their findings with the death registry and concluded that, "Older people reporting spinal pain have a 13% increased risk of mortality per year lived, but the connection is not causal." According to lead author Matthew Fernandez, “This is a significant finding as many people think that back pain is not life-threatening.” Previous research has suggested that chronic pain can wear down people’s immune systems and make them more vulnerable to disease.
 
Spinal fusion
 
A relatively small group of elite spine surgeons, mostly from premier medical institutions, regularly carry out essential complex surgeries for dire and paralysis-threatening conditions such as traumatic injuries, spinal tumors, and congenital spinal abnormalities. However, the majority of procedures undertaken by a significant number of spine surgeons have been elective fusion procedures for people diagnosed with pain referred to as “axial”, “functional” and “non-specific”. People most likely to benefit from spine surgery are the young, fit and healthy, according to a study undertaken by the American Spine Research Association. Notwithstanding, the study also suggests that the typical American candidate for spinal fusion surgery is an overweight, over-55-year-old smoker on opioids.
 
Steady growth projected for the spinal fusion market

The spine surgery market is relatively mature and dominated by a few global corporations: Medtronic, DePuy, Stryker, and Zimmer-Biomet. According to a 2017 report from the consulting firm GlobalData, the market for spinal fusion, which includes spinal plating systems, interbody devices, vertebral body replacement devices, and pedicle screw systems, is set to rise from approximately US$7bn in 2016 to US$9bn by 2023, representing a compound annual growth rate of 3.4%. The increasing prevalence of age-related degenerative spinal disorders, continued technological advances in spinal fusion surgery, such as expandable interbody cages and navigation systems, and the increased adoption of minimally invasive techniques have driven this relatively steady market growth.
 
Spinal fusion surgery

Lumbar spinal fusion surgery has been performed for decades. It is a technique that unites - fuses - 2 or more vertebrae to eliminate the motion between them. The procedure involves placing a bone graft around the spine, which, over time, heals like a fracture and joins the vertebrae together. The surgery takes away some spinal flexibility, but since most spinal fusions involve only small segments of the spine the surgery does not limit motion significantly.
 
Lumbar spinal fusion

Fusion using bone taken from the patient - autograft - has a long history of use, results in predictable healing, and currently is the “gold standard” source of bone for a fusion. One alternative is an allograft, which is cadaver bone that is typically acquired through a bone bank. In addition, several artificial bone graft materials have been developed, and include: (i) demineralized bone matrices (DBMs), which are created by removing calcium from cadaver bone. Without the mineral the bone can be changed into putty or a gel-like consistency and used in combination with other grafts. Also it may contain proteins that help in bone healing; (ii) bone morphogenetic proteins (BMPs), which are powerful synthetic bone-forming proteins that promote fusion, and have FDA approval for certain spine procedures, and (iii) ceramics, which are synthetic calcium/phosphate materials similar in shape and consistency to the patient’s own bone.
 
Different approaches to fusion surgery

Spinal fusion surgery can be either minimally invasive (MIS) or open. The former is easily marketable to patients because smaller incisions are often perceived as superior to traditional open spine surgery. Notwithstanding, open fusion surgery may be performed using surgical techniques that are considered "minimally invasive", because they require relatively small surgical incisions, and do minimal muscle or other soft tissue damage. After the initial incision, the surgeon moves the muscles and structures to the side to see your spine. The joint or joints between the damaged or painful discs are then removed, and then screws, cages, rods, or pieces of bone grafts are used to connect the discs and keep them from moving. Generally, MIS decreases the muscle retraction and disruption necessary to perform the same operation, in comparison to the traditional open spinal fusion surgery, although this depends on the preferences of individual surgeons. The indications for MIS are identical to those for traditional large incision surgery. A smaller incision does not necessarily mean less risk involved in the surgery.

There are three main approaches to fusion surgery: (i) anterior, which approaches your spine from the front and requires an incision in the lower abdomen; (ii) posterior, which is done from your back; and (iii) lateral, which approaches from your side.

 
Difficulty identifying source of back pain
 
A major obstacle to the successful treatment of spine pain by fusion is the difficulty in accurately identifying the source of a patient’s pain. The theory is that pain can originate from spinal motion, and fusing the vertebrae together to eliminate the motion will get rid of the pain. Current techniques to precisely identify which of the many structures in the spine could be the source of a patient’s back pain are not perfect. Because it can be challenging to locate the source of pain, treatment of back pain alone by spinal fusion is somewhat controversial. Fusion under these conditions is usually viewed as a last resort and should be considered only after other nonsurgical measures have failed.
 
Spinal fusion surgery is only appropriate for a very small group of back pain sufferers

Nick Thomas, also a consultant neurosurgeon at King’s College Hospital Trust, London, and the London Neurosurgery Partnership, suggests there is a scarcity of preoperative tests to indicate whether lumbar spinal fusion surgery is appropriate, and stresses that spinal fusion is appropriate only for a small group of patients who present with back pain.
 
“The overwhelming majority of patients who present with low back pain will be treated non-operatively. In a few very select cases, spinal fusion may be appropriate. A challenge in managing low back pain is that there are precious few pre-operative investigations that give a clear indication of whether a spinal fusion may or may not work. Even with MRI evidence it can be very difficult to determine whether changes in a disc are the result of the normal process of degeneration or whether they reflect a problem that might be generating the back pain. If patients fail to respond to non-operative treatments they may well consider spinal fusion. A very small group of patients, who present with a small crack in one of the vertebrae - pars defect - or slippage of the vertebrae - spondylolisthesis - may favorably respond to spinal fusion. In patients where the cause of the back pain is less clear the success rate of spinal fusion is far less.”
 
 
Back pain industry

In a book entitled Crooked, published in 2017, investigative journalist Cathryn Jakobson Ramin suggests that the US$100bn-a-year back pain industry is “often ineffective, and sometimes harmful”. Ramin challenges the assumptions of a range of therapies for back pain, including surgery, epidurals, chiropractic methods, physiotherapy, and analgesics. She is particularly damning about lumbar spinal fusion surgery. In the US 300,000 such procedures are carried out each year at a cost of about US$80,000 per surgery; Ramin suggests these have a success rate of 35%.
 
Over a period of 6 years Ramin interviewed spine surgeons, pain specialists, physiotherapists, and chiropractors. She also met with patients whose pain and desperation led them to make life-changing decisions. This prompted her to investigate evidence-based rehabilitation options and suggest how these might help back pain sufferers to avoid the range of current therapies, save time and money, and reduce their anxiety. According to Ramin people in pain are poor decision makers, and the US back pain industry exemplifies the worst aspects of American healthcare. But this is changing.
 
New Guidelines for LBP
 
In February 2017, the American College of Physicians published updated guidelines, which recommended surgery only as a last resort. It also said that doctors should avoid prescribing opioid painkillers for relief of back pain, and suggested that before patients try anti-inflammatories or muscle relaxants, they should try alternative therapies such as exercise, acupuncture, massage therapy or yoga. Doctors should reassure their patients that they are likely to get better no matter what treatment they try. The guidelines also said that steroid injections were not helpful, and neither was paracetamol, although other over-the-counter analgesics such as aspirin or ibuprofen could provide some relief. The UK’s National Institute for Health and Care Excellence (NICE) has also updated its guidelines (NG59) for back pain management. These make it clear that a significant proportion of back pain surgery is not efficacious. The new guidelines instruct doctors to recommend various aerobic and biomechanical exercises, and NHS England and private health insurers are changing their reimbursement policies accordingly. As a consequence the incidence of back surgery has fallen significantly.
 
In perspective

Syed Aftab, a Consultant Spinal Orthopaedic Surgeon at the Royal London, Barts Health NHS Trust, welcomes the new guidelines, but warns that, “We should be careful that an excellent operation performed by some surgeons on some patients does not get ‘vilified’. If surgeons stop performing an operation because of the potential of being vilified, patients who could benefit from the procedure lose out”.
 
Surgical cycle

“There seems to be a 20-year cycle for surgical procedures such as lumbar fusion. The procedure starts, some patients benefit and do well. This encourages more surgeons to carry out the procedure. Over time, indications become blurred, and the procedure is more widely used by an increasing number of surgeons. Not all patients do well. This leads to surgeons being scrutinized, some vilified, the procedure gets a bad name, surgeons stop performing the operation, and patients who could benefit from the procedure lose out,” says Aftab, who is also a member of Complex Spine London, a team of spinal surgeons and pain specialists who focus on an evidence-based multidisciplinary approach to spinal pathology.
 
Takeaway
 
LBP is a common, disabling and costly health challenge. Although therapies are expensive, not well founded on evidence, and have a relatively poor success rate, their prevalence has increased over the past 2 decades, and an aging population does not explain this entirely. Although the prevalence of lumbar spinal fusion surgery has decreased in recent years, the spine has become a rewarding source of income for global spine companies, and there have been allegations of conflicts of interest in this area of medicine. With the new UK and US guidelines the tide has turned, but ethical questions, albeit historical, should still be heeded.
  • Everyone connected with healthcare supports interoperability saying it improves care, reduces medical errors and lowers costs
  • But interoperability is a long way from reality and electronic patient records are only part of an answer
  • Could Blockchain, a technology disrupting financial systems, resolve interoperability in healthcare?
  • Blockchain is an open-source decentralized “accounting” platform that underpins crypto currencies
  • Blockchain does not require any central data hubs, which in healthcare have been shown to be easily breached
  • Blockchain technology creates a virtual digital ledger that could automatically record every interaction with patient data in a cryptographically verifiable manner
  • Some experts believe that Blockchain could improve diagnosis, enhance personalised therapies, and prevent highly prevalent devastating and costly diseases
  • Why aren’t healthcare leaders pursuing Blockchain with vigour?
 
Why Blockchain technology will not disrupt healthcare

Blockchain technology is disrupting financial systems by enhancing the reconciliation of global transactions and creating an immutable audit trail, which significantly enhances the ability to track information at lower costs, while protecting confidentiality. Could Blockchain do something similar for healthcare and resolve the challenges of interoperability by providing an inexpensive and enhanced means to immutably track, store, and protect a variety of patient data from multiple sources, while giving different levels of access to health professionals and the public?
 
Blockchain and crypto currencies

You might not have heard of Blockchain, but you have probably heard of bitcoin, an intangible or crypto currency, which was created in 2008 when a programmer called Satoshi Nakamoto (a pseudonym) described bitcoin’s design in a paper posted to a cryptography e-mail list. Then, in early 2009, Nakamoto released Blockchain: an open-source, global, decentralized accounting ledger, which underpins bitcoin by executing and immutably recording transactions without the need for a middleman. Instead of a centrally managed database, copies of the cryptographic balance book are spread across a network and automatically updated as transactions take place. Bitcoin gave rise to other crypto currencies, which exist only as transactions and balances recorded on a public ledger in the cloud and verified by a distributed group of computers.
 
Broad support for interoperability
 
Just about everyone connected with healthcare - clinicians, providers, payers, patients and policy makers - supports interoperability, arguing that data must flow rapidly, easily and flawlessly through healthcare ecosystems to reduce medical errors, improve diagnosis, enhance patient care, and lower costs. Despite such overwhelming support, interoperability is a long way from being a reality. As a result, health providers spend too much time calling other providers about patient information, emailing images and records, and attempting to coordinate care efforts across disjointed and disconnected healthcare systems. This is a significant drain on valuable human resources, which could be more effectively spent with patients or used to remotely monitor patients’ conditions. Blockchain may provide a solution to the challenges of interoperability in healthcare.
 
Electronic patient records do not resolve interoperability

A common misconception is that electronic patient records (EPR) resolve interoperability. They do not. EPRs were created to coordinate patient care inside healthcare settings by replacing paper records and filing cabinets. EPRs were not designed as open systems, which can easily collect, amalgamate and monitor a range of medical, genetic and personal information from multiple sources. To realize the full potential and promise of interoperability EPRs need to be easily accessible digitally, and in addition, have the capability to collect and manage remotely generated patient healthcare data as well as pharmacy and prescription information; family-health histories; genomic information and clinical-study data. To make this a reality existing data management conventions need to be significantly enhanced, and this is where Blockchain could help.

 

Blockchain will become a standard technology
 
Think of a bitcoin, or any other crypto currency, as a block capable of storing data. Each block can be subdivided countless times to create subsections. Thus, it is easy to see that a block may serve as a directory for a healthcare provider. Data recorded on a block can be public, but are encrypted and stored across a network. All data are immutable except for additions. Because of these and other capabilities, it seems reasonable to assume that Blockchain may become a standard technology over the next decade.
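The append-only, tamper-evident property described above can be illustrated with a minimal sketch in which each block stores a cryptographic hash of its own contents together with the hash of the previous block, so altering any earlier entry invalidates everything that follows. The record fields below are hypothetical; a real deployment would add digital signatures, encryption of the payload and consensus across many nodes.

```python
# Minimal sketch of a hash-chained, append-only ledger (hypothetical fields).
import hashlib, json, time

def block_hash(block: dict) -> str:
    # Hash only the block's content fields, not the stored hash itself.
    payload = json.dumps(
        {k: block[k] for k in ("index", "timestamp", "data", "previous_hash")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "previous_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                          # block contents were altered
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False                          # chain linkage broken
    return True

chain: list = []
append_block(chain, {"record": "referral letter stored for patient 001"})
append_block(chain, {"record": "MRI report stored for patient 001"})
print(chain_is_valid(chain))                      # True

chain[0]["data"]["record"] = "tampered entry"     # a retroactive edit...
print(chain_is_valid(chain))                      # ...is immediately detectable: False
```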
 

Blockchain and healthcare

Because crypto currencies are unregulated and sometimes used for money laundering, they are perceived as “shadowy”. However, this should not be a reason for dismissing Blockchain technology. 30 corporations, including J.P. Morgan and Microsoft, are uniting to develop decentralized computing networks based on Blockchain technology. Further, crypto currencies are approaching the mainstream, and within the financial sector there is significant and growing interest in Blockchain technology to improve interoperability. Financial services and healthcare have similar interoperability challenges, but health providers appear reluctant to contemplate a fundamental re-design of EPRs, despite the critical need for innovation as genomic data and personalized targeted therapies rise in significance and require advanced data management capabilities. Here are 2 brief examples, which describe how Blockchain is being used in financial services.
 
Blockchain’s use in financial services
 
In October 2017, the State Bank of India (SBI) announced its intention to implement Blockchain technology to improve the efficiency, transparency, security and confidentiality of its transactions while reducing costs. In November 2017, the SBI’s Blockchain partner, Primechain Technologies suggested that the key benefits of Blockchain for banks include, “Greatly improved security, reduced infrastructure cost, greater transparency, auditability and real-time automated settlements.”
 
Dubai, a global city in the United Arab Emirates, is preparing to introduce emCash as a crypto currency, and could become the world’s first Blockchain government by 2020. The changes Dubai is implementing could eventually lead to the end of traditional banking. Driving the transformation is Nasser Saidi, chief economist of the Dubai International Financial Centre, a former vice-governor of the Bank of Lebanon and a former economics and industry minister of that country. Saidi perceives the benefits of Blockchain to include the phasing out of costly traditional infrastructure services such as accounting and auditing.

 
Significant data challenges

Returning to healthcare, interoperability faces specific challenges, which include: (i) how to ensure patient records remain secure and are not lost or corrupted, given that so many people are involved in the healthcare process for a single patient and communication gaps and data-sharing issues are pervasive; and (ii) how health providers can effectively amalgamate and monitor the genetic, clinical and personal data from a variety of sources that are required to improve diagnosis, enhance treatments and reduce the burden of devastating and costly diseases.
 
Vulnerability of patient data

Not only do EPRs fail to resolve these two basic challenges of interoperability, they are also vulnerable to cybercriminals. Recently there has been an epidemic of computer hackers stealing EPRs. In June 2016 a hacker claimed to have obtained more than 10m health records, and was alleged to be selling them on the dark web. Also in 2016, hundreds of breaches involving millions of EPRs were reported in the US to the Department of Health and Human Services. The hacking of 2 American health insurers alone, Anthem and Premera Blue Cross, affected some 90m EPRs.
 
In the UK, patient data and NHS England’s computers are no less vulnerable. On 12 May 2017 a relatively unsophisticated ransomware called WannaCry infected NHS computers and affected the health service’s ability to provide care to patients. In October 2017, the National Audit Office (NAO) published a report on the impact of WannaCry, which found that 19,500 medical appointments were cancelled, computers at 600 primary care offices were locked, and five hospitals had to divert ambulances elsewhere. Amyas Morse, head of the NAO, suggests that, “The NHS needs to get their act together to ensure the NHS is better protected against future attacks.”

 
Healthcare legacy systems
 
Despite the potential benefits of Blockchain to healthcare, providers have not worked out fully how to move on from their legacy systems and employ innovative digital technologies with sufficient vigour to effectively enhance the overall quality of care while reducing costs. Instead they tinker at the edges of technologies, and fail to learn from best practices in adjacent industries.  
 
“Doctors and the medical community are the biggest deterrent for change”
 
Devi Shetty, heart surgeon, founder, and Chairperson of Narayana Health, articulates this failure: “Doctors and the medical community are the biggest deterrent for the penetration of innovative IT systems in healthcare to improve patient care . . . IT has penetrated every industry in the world with the exception of healthcare. The only IT in patient care is software built into medical devices, which doctors can’t stop. Elsewhere there is a dearth of innovative IT systems to enhance care.” Notwithstanding, Shetty believes that, “The future of healthcare is not going to be an extension of the past. The next big thing in healthcare is not going to be a new drug, a new medical device or a new operation. It is going to be IT.”
 
 
Google, Blockchain and healthcare
 
Previous HealthPad Commentaries have suggested that the failure of healthcare providers to fully embrace innovative technologies, especially those associated with patient data, has created an opportunity for giant technology companies to enter the healthcare sector and disintermediate healthcare professionals.

In May 2017, Google announced that its AI-powered subsidiary, DeepMind Health, intends to develop the “Verifiable Data Audit”, which uses Blockchain-style technology to create a digital ledger that automatically records every interaction with patient data in a cryptographically verifiable manner. This is expected to significantly reduce medical errors, since any change to, or access of, patient data is visible, and both healthcare providers and patients would be able to securely track personal health records in real time.
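The sketch below is not DeepMind’s design; it simply applies the same hash-chaining principle to patient-data access events. Each access is appended to a log whose entries each incorporate the hash of the previous entry, so an auditor can detect any retrospective alteration. The field names and example entries are hypothetical.

```python
# Conceptual sketch only of a verifiable audit trail for patient-data access.
import hashlib, json
from datetime import datetime, timezone

def entry_hash(entry: dict, previous_hash: str) -> str:
    payload = json.dumps({"entry": entry, "prev": previous_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def record_access(log: list, user: str, patient_id: str, action: str) -> None:
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "patient_id": patient_id,
        "action": action,
    }
    previous = log[-1]["hash"] if log else ""
    log.append({"entry": entry, "hash": entry_hash(entry, previous)})

def first_tampered_index(log: list):
    """Return the index of the first entry that fails verification, or None."""
    previous = ""
    for i, item in enumerate(log):
        if item["hash"] != entry_hash(item["entry"], previous):
            return i
        previous = item["hash"]
    return None

log: list = []
record_access(log, "dr_smith", "patient-001", "viewed imaging report")
record_access(log, "nurse_jones", "patient-001", "updated medication list")
print(first_tampered_index(log))          # None: the audit trail verifies

log[0]["entry"]["user"] = "someone_else"  # an attempted cover-up of an access...
print(first_tampered_index(log))          # ...is detected at index 0
```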

 
Takeaways

Blockchain is a new, innovative and powerful technology that could play a significant role in overcoming the challenges of interoperability in healthcare, and thereby help to enhance the quality of care, improve diagnosis, reduce costs and prevent devastating diseases. However, even if Blockchain were the perfect technological solution to enable interoperability, change would not happen in the short term. As Max Planck observed, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” While we wait for those who control our healthcare systems to die, billions of people will continue to suffer from preventable lifetime diseases, healthcare costs will escalate, healthcare systems will go bankrupt, and productivity in the general economy will fall.
  • Chronic obstructive pulmonary disease (COPD) is a lung condition that makes it hard to breathe, but it is often preventable and treatable
  • COPD affects some 210m people worldwide, its prevalence is increasing, and it costs billions in treatment and lost production
  • By 2020 COPD is projected to be the 3rd leading cause of death worldwide
  • Recently, scientific advances have benefitted COPD research
  • But COPD researchers are challenged to provide compelling data in support of their studies
  • COPD research would benefit from smart online communications strategies
  • This could strengthen collaboration among globally dispersed scientists and people living with COPD, and expand the geographies from which COPD data are retrieved
  
Chronic Obstructive Pulmonary Disease (COPD) and the battle for breath
 
Chronic Obstructive Pulmonary Disease (COPD) is a common, preventable and treatable disorder that affects 210m people worldwide. Its prevalence is increasing globally, and each year it causes some 3m deaths. Although COPD therapies have improved substantially in recent years, and benefit from advancing science, researchers are still challenged to provide compelling data in support of their studies. There is no definitive treatment for COPD, and more research is needed to improve the condition’s clinical management. There are regions of the world where the prevalence of COPD is increasing significantly, but where information about the disorder is sparse. This Commentary suggests that COPD research could benefit from enhancing the connectivity of globally dispersed scientists and people living with the disorder, and from expanding the geographies from which COPD data are retrieved. Before suggesting ways to achieve this, let us describe COPD, and its vast and escalating burden.
 
Chronic Obstructive Pulmonary Disease (COPD)
 
COPD is an umbrella term used to describe common progressive lifetime diseases that damage the lungs and airways and make breathing difficult. Its prevalence is increasing, especially in developing countries. It is the 4th leading cause of death worldwide and is projected to be the 3rd by 2020. The causes of COPD are well known, but the nature of the condition is still not fully understood, even though COPD therapies have improved significantly in recent years. The effects of COPD are persistent and progressive, but treatment can relieve symptoms, improve quality of life and reduce the risk of death. COPD impacts people differently, medications affect patients differently, and such differences make it challenging for doctors to identify patients who are at risk of a more rapidly progressing condition.

Although COPD is complex, with different etiologies, pathologies and physiological effects, there are two main forms: (i) chronic bronchitis, which involves a long-term cough with mucus, and (ii) emphysema, which involves damage to the lungs over time. COPD also has significant extra-pulmonary effects, including weight loss, nutritional abnormalities and skeletal muscle dysfunction, and it is a major cause of psychological suffering. Further, COPD may promote heart failure because obstruction of the airways and damage to the lining of the lungs can result in abnormally low oxygen levels in the vessels inside the lungs. The resulting pulmonary hypertension places excess strain on the right ventricle, which can lead to heart failure.

In developed countries, the biggest risk factor for the development of COPD is cigarette smoking, whereas indoor pollutants are the major risk factor for the disease in developing nations. Not all smokers develop COPD, and the reasons for disease susceptibility in these individuals have not been fully elucidated. Although the mechanisms underlying COPD remain poorly understood, the disease is associated with chronic inflammation, which is usually corticosteroid resistant, and with destruction of the airways and lung parenchyma (functional lung tissue). There is no cure for COPD, but it is sometimes partially reversible with the administration of inhaled long-acting bronchodilators, and its progression can be slowed through smart maintenance therapy, in particular smoking cessation. People with stage 1 or 2 COPD lose at most a few years of life expectancy at age 65 compared with persons with no lung disease, in addition to any years lost due to smoking. Current smokers with stage 3 or 4 COPD lose about 6 years of life expectancy, in addition to the almost 4 years lost due to smoking.
 
The economic burden of COPD is vast and increasing, with attributed costs for hospitalizations, loss of productivity and disability, in addition to medical care. In 2010, the condition’s annual cost in the US alone was estimated to be approximately US$50bn, comprising $20bn in indirect costs and $30bn in direct healthcare expenditures. COPD treatment costs the UK more than £1.9bn each year. Over the past decade, progress in tackling the disease in the UK has stagnated, and there is wide variation in the quality of care.

 
Prevalence

The prevalence of COPD has increased dramatically due to a combination of aging populations, higher smoking prevalence, changing lifestyles and environmental pollution. In developed economies, COPD affects an estimated 8 to 10% of the adult population, 15 to 20% of the smoking population, and 50 to 80% of lung cancer patients with substantial smoking histories. For many years, COPD was considered to be a disease of developed nations, but its prevalence is increasing significantly in developing countries, where almost 90% of COPD deaths occur. Even though most of the research data on COPD comes from developed countries, accurate epidemiologic data on the condition are challenging and expensive to collect. There is a dearth of systematically collected COPD prevalence data from developing nations, and a paucity of COPD studies in Africa, SE Asia and the Eastern Mediterranean region. Most of the available prevalence estimates from low- to middle-income countries are not based on spirometry testing (the internationally accepted gold standard for the diagnosis of COPD, which measures lung capacity). Hence, the available COPD data from developing countries cannot be interpreted reliably in a global context, and more data from these regions are necessary to extend and support further studies.

 

Mortality
 
COPD is one of the three leading contributors to respiratory mortality in developed countries, along with lung cancer and pneumonia. Globally, it is estimated that COPD caused 3m deaths in 2015, some 5% of all deaths that year. The 5-year mortality rate for people with COPD typically ranges from 40 to 70%, depending on disease severity, while the 2-year mortality rate for people with severe COPD is about 50%, which is worse than that for many common cancers. India and China, which together account for 33% of the world’s population, account for 66% of global COPD mortality. Further, COPD-associated mortality is estimated to grow by 160% in SE Asia over the coming decades, a region where COPD research and data are sparse.

Risk factors

Air pollution
Air pollution is a risk factor for COPD and other respiratory disorders. According to a 2016 World Health Organization report, about 92% of the world’s population is exposed to air that exceeds WHO air quality limits. The Lancet Commission on Pollution and Health, the most comprehensive global analysis to date, published in The Lancet in October 2017, suggests that each year pollution kills over 9m people prematurely and costs US$4.6tn, equivalent to more than 6% of global GDP.

Tobacco smoke
In advanced industrial economies, exposure to tobacco smoke is the leading risk factor for developing COPD, and cigarette smoking is linked to 80% of all COPD deaths. In the US, for instance, approximately 25% of the adult population continues to smoke, despite aggressive smoking prevention and cessation efforts. Each year COPD claims some 134,700 American lives; it is the 4th leading cause of death in the US and is expected to be the 3rd by 2020.

Biomass fuels
In developing economies, the COPD burden is driven more by exposure to indoor air pollution, notably from the use of biomass fuels for cooking and heating. Almost 3bn people worldwide use biomass and coal as their main source of energy for cooking, heating and other household needs. In these communities, biomass fuels are often burned inefficiently in open fires, leading to levels of indoor air pollution that carry a greater COPD risk than smoking or outdoor air pollution. Biomass fuels account for the high prevalence of COPD among non-smoking women in parts of the Middle East, Africa and Asia, where indoor air pollution is estimated to kill 2m women and children each year. COPD research and data from these regions are sparse.

Genetics
In some people, COPD is caused by a genetic condition known as alpha-1 antitrypsin deficiency (AATD). People with AATD do not make a type of protein that helps to protect the lungs. Because not all individuals with COPD have AATD, and because some individuals with COPD have never smoked, it is suggested that there are other genetic predispositions to developing COPD. AATD is not a common cause of COPD, and few people know they have the genetic condition. In the US for example, it is estimated that only about 100,000 people have AATD.
 
Symptoms and diagnosis
 
The typical symptoms of COPD are cough, excess sputum production, dyspnea (difficulty breathing), recurring respiratory infections, and fatigue. Because symptoms develop relatively slowly, people are sometimes unaware that they have lung problems. People with COPD are diagnosed by way of a multifactorial assessment that includes spirometry, clinical presentation, symptomatology and risk factors.
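Spirometry results are commonly interpreted using the GOLD convention, under which a post-bronchodilator FEV1/FVC ratio below 0.70 indicates persistent airflow limitation, with severity then graded by FEV1 as a percentage of the predicted value. The minimal Python sketch below illustrates that grading only; it is not drawn from this Commentary’s sources and is no substitute for the multifactorial clinical assessment described above.

# Illustrative grading of post-bronchodilator spirometry using the widely
# cited GOLD thresholds. Not a diagnostic tool.

def classify_spirometry(fev1_litres, fvc_litres, fev1_percent_predicted):
    """Return a rough airflow-limitation grade.

    fev1_litres: forced expiratory volume in one second
    fvc_litres: forced vital capacity
    fev1_percent_predicted: FEV1 as a % of the value predicted for age, sex and height
    """
    ratio = fev1_litres / fvc_litres
    if ratio >= 0.70:
        return "No persistent airflow limitation on this test"
    if fev1_percent_predicted >= 80:
        return "GOLD 1 (mild)"
    if fev1_percent_predicted >= 50:
        return "GOLD 2 (moderate)"
    if fev1_percent_predicted >= 30:
        return "GOLD 3 (severe)"
    return "GOLD 4 (very severe)"

# Hypothetical example: FEV1 1.8 L, FVC 3.0 L (ratio 0.60), FEV1 62% of predicted
print(classify_spirometry(1.8, 3.0, 62))  # GOLD 2 (moderate)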
 
COPD management

The heterogeneous nature of COPD, the fact that it affects different people differently, and the fact that different therapies impact the condition differently all present challenges for clinicians. There are several types of drugs that can be used for the condition, depending on whether the drug is intended to improve airflow obstruction, provide symptom relief, modify or prevent exacerbations (a worsening of symptoms, often precipitated by infection), or alter the progression of the disease. It is possible that a drug may affect only one aspect of the condition or that it may act on many. It is also possible that a drug may benefit COPD patients in other meaningful ways.

View from a leading pulmonologist
“Some treatments for COPD overlap with asthma,” says Murali Mohan, Consultant Pulmonologist at Narayana Health City in Bengaluru, India. “The foundation for treating COPD is inhaled long-acting bronchodilators, whereas corticosteroids are beneficial primarily in patients who have coexisting features of asthma, such as eosinophilic inflammation and more reversibility of airway obstruction . . . An important part of COPD management is for smokers to stop, and to reduce a patient’s exposure to pollutants both in the home and at work. Vaccines are used to prevent serious infections . . . People with COPD tend to eat less, and become breathless when they eat. There is a lot of systemic inflammation, which causes patients to lose weight, but being overweight is just as bad. So we ensure that COPD patients adopt a healthy diet and exercise. This is to obtain an ideal body weight, and to supplement muscle strength, which is very important because it’s the muscles that move the lungs and get the air in and out of the chest . . . Often we recommend psychotherapy because a lot of people with COPD are depressed. More research is needed to better understand the condition’s mechanisms, and to develop new treatments that reduce disease activity and progression,” says Mohan, see videos below.
 
What are the treatments for COPD?
 
 COPD market and changing treatment landscape
 
Given the vast and escalating global prevalence of COPD, the market for therapies is also huge, global and rapidly growing, and giant pharmaceutical companies aggressively compete for market share. The current size of the COPD market is estimated to be US$17bn. The overall respiratory therapeutics market, which, in addition to COPD, includes asthma, idiopathic pulmonary fibrosis (IPF) and cystic fibrosis, is worth about US$30bn and is projected to grow to US$47bn by 2022. Currently, there are some 900 drugs in development for all types of respiratory disorders. The sheer size and rate of growth of this market, plus the fact that there is still no definitive treatment for COPD, motivates pharmaceutical companies to commit millions to its research. Notwithstanding, the overwhelming majority of current research data are derived from a relatively narrow band of developed nations.
 
COPD research

Influence of cigarette smoking on COPD research
For many years COPD research concentrated on the condition’s association with cigarette smoking. This led to the early discovery that a subgroup of patients with emphysema was genetically deficient in an inhibitor of an enzyme that breaks down proteins and peptides. Although this explanation captures key elements of COPD, it has not led to a reduction in the condition’s prevalence or morbidity, to the development of any therapy proven to modify the disease process itself, or to an adequate understanding of how risk factors other than cigarette smoking may contribute to COPD pathogenesis.
 
Biologics
Although research has improved and our understanding of COPD has advanced, challenges remain for researchers. Contributing to these is a broader array of mechanisms implicated in COPD’s pathogenesis than in many other respiratory disorders. Notwithstanding, there has been a determined focus on a range of targeted biologic agents as potential therapies for the condition, which has led to an improved understanding of the pathophysiology and clinical manifestations of COPD, and to an increased awareness of the importance of inflammation.
 
Innovative sampling techniques have led to the identification of several pulmonary biomarkers (measurable substances in the blood or tissues that signal the presence of disease), which could potentially provide enhanced insight into the pathophysiological mechanisms of exacerbation. However, sampling methods could still be improved: the utility of current methods is not yet established, and they have yet to provide compelling data in support of their use in COPD. This suggests a need for more research directed toward identifying the bases of COPD exacerbations and clarifying the pathophysiological processes that contribute to the worsening of symptoms. Other research studies focus on the underlying genetics of COPD in order to find better ways of identifying which smokers are more likely to develop the condition.
 
Challenge of COPD data
 
When recruiting patients for COPD studies, it is impossible to determine the speed at which lung function will deteriorate in any given individual. This raises methodological challenges, particularly with regard to the size and nature of a cohort at the beginning and end of a study. Further, longitudinal studies require regular and systematic collection of patient data, which may be a combination of self-reporting, electronic patient records (EPR) and the results of tests undertaken by health professionals. Collecting patients’ longitudinal perceptions of the status of their COPD from a dispersed cohort is challenging because of the different distributions of the disease, and the variation in the availability and quality of data on significant events, such as exacerbations.
 
Self-management

More recently, apps have been developed to encourage the self-management of COPD, but they are also potentially helpful for research, because they can unobtrusively enter the daily lives of people with COPD. However, the utility of apps as research aids has been limited because they are rarely configured to aggregate, export and share the data they collect, although this is changing.

The large and rapid growth of the health-related apps market, and the impact it has on shaping the attitudes and expectations of millions of people about healthcare, suggest that the utility of such devices to support clinical research will increase. Helpful in this regard is the fact that apps are being configured to enable rapid remote tests, and to collect, transmit, store and analyse data.
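As an illustration of what collecting, storing and analysing such data might look like in practice, the hypothetical Python sketch below defines a minimal symptom-diary record, exports pseudonymised entries to a research-ready CSV file, and computes one simple summary statistic. The field names and scoring scales are illustrative assumptions, not a description of any specific app.

import csv
import statistics
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical daily symptom-diary record for a COPD self-management app.

@dataclass
class DiaryEntry:
    patient_id: str           # pseudonymised identifier
    entry_date: date
    breathlessness: int       # self-rated 0 (none) to 5 (severe)
    cough: int                # self-rated 0 (none) to 5 (severe)
    rescue_inhaler_uses: int
    suspected_exacerbation: bool

def export_for_research(entries, path):
    """Write pseudonymised diary entries to a CSV file that researchers can aggregate."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(entries[0]).keys()))
        writer.writeheader()
        for entry in entries:
            writer.writerow(asdict(entry))

def mean_breathlessness(entries):
    """A simple analysis step: the average self-rated breathlessness score."""
    return statistics.mean(entry.breathlessness for entry in entries)

entries = [
    DiaryEntry("copd-042", date(2018, 3, 1), 2, 1, 0, False),
    DiaryEntry("copd-042", date(2018, 3, 2), 4, 3, 2, True),
]
export_for_research(entries, "diary_export.csv")
print(mean_breathlessness(entries))  # 3.0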
 
Data validity and patient compliance
Notwithstanding, two significant challenges associated with apps remain for COPD researchers. One concerns the technological adequacy of apps to consistently produce valid data; the other is the compliance of patients in COPD studies. Both of these concerns, however, are being addressed.
 
Validation
A study published in 2017 in the journal Nature Biotechnology provides some validation for using app-derived data in clinical studies. Scientists developed an app to collect survey data from 7,600 asthma sufferers over a 6-month period on how they managed their condition. Researchers then compared these app-generated patient-reported data with similar data from traditional asthma studies and found no significant differences. Although some methodological challenges remain when using apps to recruit patients for clinical studies, findings from this and other studies give scientists a degree of confidence that app-derived data can be reliable enough for clinical studies.
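The kind of comparison described above can be illustrated with a simple two-sample test: if app-reported and clinic-reported scores are drawn from similar populations, the test should detect no significant difference. The Python sketch below uses made-up numbers and Welch’s t-test purely as an illustration; it does not reproduce the cited study’s actual methods or data.

from scipy import stats

# Hypothetical symptom-control scores from two arms of a comparison:
# one collected through an app, one through conventional clinic visits.
app_reported = [72, 68, 75, 80, 66, 71, 77, 69, 74, 70]
clinic_reported = [70, 73, 69, 78, 67, 72, 76, 71, 68, 74]

# Welch's t-test does not assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(app_reported, clinic_reported, equal_var=False)

if p_value > 0.05:
    print(f"No significant difference detected (p = {p_value:.2f})")
else:
    print(f"Significant difference detected (p = {p_value:.2f})")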
 
Giant tech companies and medical research
The increasing validation of app-generated health data is driving growth in pairing wireless health apps with data monitoring, and creating an opportunity for giant global technology companies to enter the healthcare market by joint venturing with big pharmaceutical companies. Such ventures create big-data opportunities to aggregate vast amounts of data from millions of COPD sufferers on their condition and on the efficacy of specific drugs. They also allow patients to keep track of their drug usage remotely, and health professionals to access the data instantly to monitor an individual patient’s condition.
 
Compliance
There is some evidence to suggest that people with COPD are less compliant in recording information about their condition when they are experiencing an exacerbation, or simply not feeling well. A solution might be to employ techniques that “nudge” patients to be more compliant. The genesis of nudge systems is a 2008 publication, Nudge, by US academics Richard Thaler and Cass Sunstein. The authors suggest that making small changes to the way options are presented to individuals “nudges” them into behaviours that they would not normally adopt. Following the publication of the book, “nudge units” were set up in the White House and in 10 Downing Street to encourage people to change entrenched behaviours in order to improve public services while reducing costs.

The UK’s Nudge Unit has, among other things, significantly increased the rate of organ donation, and encouraged a substantial number of individuals to initiate and maintain healthier lifestyles. Mindful of these successes, governments throughout the world have set up nudge units. A 2017 OECD report suggests that nudge units have entered the mainstream and could be used much more widely. Also in 2017, Richard Thaler was awarded the Nobel Prize for his contribution to behavioural economics. COPD researchers might consider replacing current “pull” techniques with nudge techniques to enhance patient compliance in COPD clinical studies.
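A hypothetical sketch of what such a nudge might look like inside a symptom-diary app is shown below: rather than researchers “pulling” data from patients, the app gently prompts anyone who has not logged an entry by early evening. The timing threshold and message wording are illustrative assumptions, not a tested intervention.

from datetime import datetime, timedelta

# Illustrative nudge: prompt a patient only if no entry has been logged in the
# last 24 hours and it is already evening, keeping the message short and encouraging.

def choose_nudge(last_entry_time, now):
    if last_entry_time is not None and now - last_entry_time < timedelta(hours=24):
        return None  # an entry has already been logged today; no prompt needed
    if now.hour < 18:
        return None  # too early in the day to prompt
    return ("Most people in the study log their symptoms in under a minute. "
            "Would you like to add today's entry now?")

message = choose_nudge(
    last_entry_time=datetime(2018, 3, 1, 9, 0),
    now=datetime(2018, 3, 2, 19, 30),
)
if message:
    print(message)  # in a real app this would be sent as a push notification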
 
Takeaways

For years COPD research was in the doldrums, but over the past decade things have changed significantly. Notwithstanding, COPD studies could benefit from more compelling data. This could be achieved by employing smart online communications strategies that increase the connectivity of globally dispersed COPD researchers and individuals living with the condition, with an eye to enhancing patient compliance in COPD studies, increasing the quality of research data, and expanding the geographies from which COPD data are retrieved.