  • A new gene editing study is poised on the cusp of medical history because it holds out the prospect of providing a cure for hemophilia
  • Hemophilia is a rare incurable life-threatening blood disorder
  • People with hemophilia have little or no protein needed for normal blood clotting
  • Severe forms of the disorder may result in spontaneous and excessive bleeding
  • In recent history many people with hemophilia died before they reached adulthood because of the dearth of effective treatments
  • A breakthrough therapy in the 1980s was contaminated with deadly viruses
 
A cure for hemophilia?

A study led by researchers from Barts Health NHS Trust and Queen Mary University of London, and published in a 2017 edition of the New England Journal of Medicine, has taken a significant step towards finding a cure for hemophilia A, a rare, incurable, life-threatening blood disorder caused by the failure to produce certain proteins required for blood clotting. In recent history only a few people with hemophilia survived into adulthood: because of the dearth of effective treatments, even a small cut, or internal hemorrhaging after a minor bruise, was often fatal.
 
The royal disease

There are 2 main types of hemophilia: A and B. Both are rare congenital bleeding disorders sometimes referred to as “the royal disease,” because in the 19th and 20th centuries hemophilia affected European royal families. Queen Victoria of England is believed to have been a carrier of hemophilia B, a rarer condition than hemophilia A. 2 of Victoria’s 5 daughters (Alice and Beatrice) were also carriers. Through marriage they passed on the mutation to various royal houses across Europe, including those of Germany, Russia and Spain. Victoria’s son Prince Leopold was diagnosed with hemophilia B as a child. He died at 31, and throughout his life had a constant staff of doctors around him.
 
Epidemiology

The worldwide incidence of hemophilia A is about 1 in 5,000 males, with approximately 33% of affected individuals having no family history of the disorder; in these cases it results from a new mutation or an acquired immunologic process. Only 25% of people with hemophilia receive adequate treatment, most of them in developed nations. In 2016 there were some 7,700 people diagnosed with the condition in the UK, 2,000 of whom had a severe form with virtually no blood-clotting protein. In the US there are some 20,000 people living with the disorder. Morbidity and death from hemophilia are primarily the result of hemorrhage, although HIV and hepatitis infections became prominent in patients who received therapies with contaminated blood products prior to the mid-1980s: see below.
 
Etiology
Hemophilia A and B are similar disorders. Both are caused by an inherited or acquired genetic mutation, which reduces or eliminates production of the coagulation proteins referred to as factor VIII (hemophilia A) and factor IX (hemophilia B). Factors VIII and IX are essential blood-clotting proteins, which work with platelets to stop or control bleeding. The amount of the protein present in your blood and its activity determine the severity of symptoms, which range from mild to severe. Factor VIII and IX deficiencies are the best-known and most common types of hemophilia, but other clotting-factor deficiencies also exist.

Factors VIII and IX are encoded by genes located on the X chromosome. Chromosomes come in pairs: females have 2 X chromosomes, while males have 1 X and 1 Y chromosome. Only the X chromosome carries the genes related to clotting factors. A male who has a hemophilia gene on his X chromosome will have hemophilia. Since females have 2 X chromosomes, a mutation must be present in both copies of the gene to cause hemophilia. When a female has a hemophilia gene on only 1 of her X chromosomes, she is a “carrier” of the disorder and can pass the gene to her children. Sometimes carriers have low levels of a clotting factor and therefore have symptoms of hemophilia, including bleeding.
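The X-linked inheritance pattern described above is simple enough to enumerate. Below is a minimal Python sketch, not taken from the study, that lists the equally likely outcomes for a child of a carrier mother and an unaffected father; the allele labels and function name are illustrative assumptions.

```python
from itertools import product

def offspring_outcomes(mother_x, father_x):
    """Enumerate the 4 equally likely children given one X allele
    from the mother and either the father's X (daughter) or Y (son).

    mother_x: the mother's two X alleles, e.g. ('X', 'Xh')
    father_x: the father's single X allele ('X' or 'Xh')
    'Xh' marks an X chromosome carrying the hemophilia mutation.
    """
    outcomes = {}
    for mx, paternal in product(mother_x, [father_x, 'Y']):
        if paternal == 'Y':  # son: his only X comes from his mother
            child = 'affected son' if mx == 'Xh' else 'unaffected son'
        else:                # daughter: one X from each parent
            n_mutant = [mx, paternal].count('Xh')
            child = ('affected daughter' if n_mutant == 2 else
                     'carrier daughter' if n_mutant == 1 else
                     'unaffected daughter')
        outcomes[child] = outcomes.get(child, 0) + 0.25
    return outcomes

# A carrier mother (as Queen Victoria is believed to have been) and an
# unaffected father: each child has a 1-in-4 chance of being an affected
# son, a carrier daughter, an unaffected son, or an unaffected daughter.
print(offspring_outcomes(('X', 'Xh'), 'X'))
```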

 

Hemophilia A and B

Hemophilia A and B affect all races and ethnic groups equally. Hemophilia B is the less common of the two. Notwithstanding, the result is the same for people with hemophilia A and B: both bleed more easily and for a longer time than usual. The difference between hemophilia A and B lies in which factor is missing or at a low level, and the replacement treatments differ accordingly: hemophilia A is treated with factor VIII, and hemophilia B with factor IX. Giving factor VIII to someone with hemophilia B will not help to stop the bleeding.
 
Mild to severe hemophilia

People with mild hemophilia have few symptoms on a day-to-day basis but may bleed excessively, for example during surgery, whilst those with a severe form of the disorder may have spontaneous bleeds. Severe hemophilia tends to be diagnosed in childhood or as part of screening in families known to have bleeding disorders. People who do not have hemophilia have a factor VIII activity of 100%, whereas people who have severe hemophilia A have a factor VIII activity of less than 1%. In severe forms, even the slightest injury can result in excessive bleeding as well as spontaneous internal bleeding, which can be life threatening. Also, the pressure of massive bleeding into joints and muscles makes hemophilia one of the most painful diseases known to medicine. Without adequate treatment, many people with hemophilia die before they reach adulthood. However, with effective replacement therapy, life expectancy is only about 10 years less than that of males without hemophilia, and children can look forward to a near-normal life expectancy. Replacement therapy entails concentrates of clotting factor VIII (for hemophilia A) or clotting factor IX (for hemophilia B) being slowly dripped or injected into a vein to replace the clotting factors that are missing or low.
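Clinicians commonly express these severity bands as thresholds on residual factor activity. The sketch below encodes the widely used clinical convention (severe below 1%, moderate 1 to 5%, mild above 5% and up to 40% of normal activity); the exact cut-offs follow published convention rather than this Commentary, and the function is illustrative only.

```python
def hemophilia_severity(factor_activity_pct):
    """Classify hemophilia severity from residual factor VIII or IX
    activity, expressed as a percentage of normal (nominally 100%)."""
    if factor_activity_pct < 1:
        return "severe"      # spontaneous bleeds into joints and muscles
    if factor_activity_pct <= 5:
        return "moderate"    # bleeding after minor injury
    if factor_activity_pct <= 40:
        return "mild"        # excessive bleeding mainly after surgery or trauma
    return "not hemophilia"  # activity within or near the normal range

print(hemophilia_severity(0.5))  # severe
print(hemophilia_severity(25))   # mild
```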
 
Brief history of treatments

In the 1950s and 60s fresh frozen plasma (FFP) was the principal therapy for hemophilia A and B. However, because each bag of FFP contained only very small amounts of the clotting agents, large amounts of plasma had to be transfused to stop bleeding episodes and people with the conditions had to be hospitalized. In some countries FFP is still the only product available for treating hemophilia.
 
In the mid-1960s Judith Pool, an American scientist, made a significant advance in hemophilia therapy when she discovered that the sludge that sank to the bottom of thawing plasma was rich in factor VIII (but not IX) and could be frozen and stored as “cryoprecipitate”. This more concentrated clotting factor VIII became the preferred treatment for severe hemophilia A, as it required smaller volumes and patients could receive treatment as outpatients. Notwithstanding, cryoprecipitate is less safe from viral contamination than concentrates, and is harder to store and administer.

 
The tainted blood scandal

In the early 1970s drug companies found they could take the clotting factors VIII and IX out of blood plasma and freeze-dry them into a powder. This quickly became the treatment of choice as it could be used to treat hemophilia at home. There was huge demand for the new freeze-dried product, and drug companies pooled the plasma of large groups of donors, sometimes as many as 25,000, to meet it. This led companies seeking substantial supplies of blood to pay prisoners and others to give blood. Some donors were addicted to drugs and infected with HIV and hepatitis C. By the early 1980s, human blood, plasma and plasma-derived products used in therapies for hemophilia were discovered to be transmitting potentially deadly blood-borne viruses, including hepatitis viruses and HIV. So the same advanced substance used to treat people with hemophilia was also responsible for causing sufferers prolonged illness and premature death.
 
Infected hemophilia treatments in the UK

A report published in 2015 by a UK All Party Parliamentary Group on Haemophilia found that 7,500 people in Britain with the disorder were infected by contaminated blood products. According to Tainted Blood, a group set up in 2006 to campaign on behalf of people with hemophilia, 4,800 people were infected with hepatitis C, a virus that causes liver damage and can be fatal. Of these, 1,200 were also infected with HIV, which can cause AIDS, and some 2,400 sufferers died prematurely.
 
A 2017 UK official inquiry
 
In 1991 the UK government made ex-gratia payments to hemophilia patients infected with HIV, averaging £60,000 each, on condition that they dropped further legal claims. The extent of infection with hepatitis C was not discovered until years later. Campaigners unearthed evidence suggesting that UK officials in the Department of Health knew or suspected that the imported factor concentrates were risky as early as 1983. Notwithstanding, NHS England is said to have continued to administer the contaminated concentrates to patients with hemophilia. In 2017 the UK government set up an inquiry into the NHS contaminated blood scandal.  
 
A new scientific era

In the early 1980s, soon after HIV was identified, another significant breakthrough occurred in the treatment of hemophilia when manufacturers began producing clotting factors from genetically engineered cells carrying a human factor gene (so-called recombinant products). Today, all commercially prepared factor concentrates are treated to remove or inactivate blood-borne viruses. Also, scientists have a better understanding of the etiology of the disease, are able to detect and measure its inhibitors, and know how to eliminate them by manipulating the immune system.
 
A cure for hemophilia A

Researchers, led by John Pasi, Director of the Haemophilia Centre at Barts Health NHS Trust and Professor of Haemostasis and Thrombosis at Queen Mary University of London, have successfully carried out the first gene editing study for hemophilia A. The study enrolled 13 patients across England and injected them with a copy of their missing gene, which allows their cells to produce the essential blood-clotting factor VIII. Researchers followed participants for up to 19 months, and findings showed that 85% had normal or near-normal levels of the previously missing factor VIII clotting agent, and all participants were able to stop their previously regular hemophilia A treatment: they were effectively cured.
 
Gene editing
Gene editing is particularly relevant for diseases such as hemophilia A where, until the recent UK study reported in this Commentary, there was no cure. Gene editing allows doctors to prevent and treat a disorder by inserting a healthy gene into a patient’s cells to replace a mutated or missing gene that causes the disease. The technique has risks and is still under evaluation to ensure that it is safe and effective. In 2015, a group of Chinese scientists edited the genomes of human embryos in an attempt to modify the gene responsible for β-thalassemia, another potentially fatal blood disorder.

 
Expanding the study

According to Pasi, "We have seen mind-blowing results, which have far exceeded our expectations. When we started out we thought it would be a huge achievement to show a 5% improvement, so to actually be seeing normal or near normal factor levels with dramatic reduction in bleeding is quite simply amazing. We really now have the potential to transform care for people with haemophilia using a single treatment for people who at the moment must inject themselves as often as every other day." Pasi and his colleagues are expected to undertake further studies with participants from the USA, Europe, Africa and South America.
 
Takeaway

Hemophilia is a life-changing, often painful and debilitating disorder. In recent history there was a dearth of effective therapies, and people with the disorder barely survived into adulthood. More recently, the concentrated blood products that advanced treatment were contaminated with deadly viruses, which further destroyed the lives of sufferers and in many cases led to their premature death. The study undertaken by Pasi and his colleagues is on the cusp of medical history because it has the potential to provide a cure for what has been an incurable, life-changing disease. Notwithstanding, it is worth bearing in mind that scientific discovery is rarely quick and rarely proceeds in a straight line.
  • In high-income countries populations are aging
  • By 2050 the world population of people over 60 is projected to reach 2bn
  • Age-related low back pain is the highest contributor to disability in the world
  • Over 80% of people will experience back pain at some point in their life
  • Older people with back pain have a higher chance of dying prematurely
  • The causes of back pain are difficult to determine which presents challenges for the diagnosis and management of the condition
  • The US $100bn-a-year American back pain industry is “ineffective”
  • Each year some 10,000 and 300,000 spinal fusion surgeries are carried out in the UK and US respectively
  • 20% of spinal fusion surgeries are undertaken without good evidence
  • In 10 to 39% of spine surgery patients pain continues or worsens after surgery
 
Age of the aged and low back pain
 
A triumph of 20th century medicine is that it has created the “age of the aged”. By 2050 the world population of people aged 60 and older is projected to be 2bn, up from 900m in 2015. Today, there are 125m people aged 80 and older, and by 2050 there are expected to be 434m people in this age group worldwide. The average age of the UK population has reached 40. Some 22% of the UK population will be over 65 by 2031, exceeding the percentage under 25, and 33% of people born today in the UK can expect to live to 100. However, this medical success is the source of rapidly increasing age-related disorders, which present significant challenges for the UK and other high-income nations. Low back pain (LBP) is the most common age-related pain disorder, and is ranked as the highest contributor to disability in the world.
 
At some point back pain affects 84% of all adults in developed economies. Research published in 2017 in the journal Scoliosis and Spinal Disorders suggests that LBP is the most common health problem among older adults that results in pain and disability. The over-65s are the second most common age group to seek medical advice for LBP, which represents a significant and increasing workload for health providers. Each year back pain costs the UK and US economies respectively some £5bn and more than US$635bn in medical treatment and lost productivity, and LBP accounts for 11% of the total disability of the respective populations. This Commentary discusses therapies for LBP, and describes the changing management landscape for this vast and rapidly growing condition.

 

Your spine and LBP

 

Your spine, which supports your back, consists of 24 vertebrae: bones stacked on top of one another. At the bottom of your spine, below your vertebrae, are the bones of your sacrum and coccyx. Threading through the entire length of your vertebrae is your spinal cord, which transmits signals from your brain to the rest of your body. Your spinal cord ends in your lower back and continues as a series of nerves that resemble a horse’s tail, hence its medical name, ‘cauda equina’. Between the vertebrae are discs. In younger people discs contain a high degree of water, which gives them the ability to act like shock absorbers. During the normal aging process discs lose much of their water content and degenerate. Such degeneration may result in a herniated disc, when the disc nucleus extrudes through the disc’s outer fibres, or in compression of nerve roots, which may lead to radiculopathy. This condition is more commonly known as sciatica: pain caused by compression of a spinal nerve root in the lower back, often associated with the degeneration of an intervertebral disc, which can manifest itself as pain, numbness, or weakness of the buttock and outer side of the leg.

 

Challenges in diagnosis
 
Because your back is composed of so many connected tissues, including bones, muscles, ligaments, nerves, tendons, and joints, it is often difficult for doctors to say with confidence what causes back pain, even with the help of X-rays and MRI scans. Usually, LBP does not have a serious cause. In the majority of cases LBP will reduce, and often disappear, within 4 to 6 weeks, and can therefore be self-managed by keeping mobile and taking over-the-counter painkillers. However, in a relatively small proportion of people with LBP, the pain and disability can persist for many months or even years. Once LBP has been present for more than a year, few people return to normal activities. There is not sufficient evidence to suggest definitive management pathways for this group, which accounts for the majority of the health and social costs associated with LBP.
 
Assessing treatment options for back pain

Ranjeev Bhangoo, a consultant neurosurgeon at King’s College Hospital Trust, London, and the London Neurosurgery Partnership, describes the nature and role of intervertebral discs and how treatment options should be assessed.

“When a person presents with a problem in the lower back, which might manifest as leg or arm pain, you need to ask 3 questions: (i) is the history of the pain compatible with a particular disc causing the problem? (ii) Does an examination suggest that a particular disc is causing a problem? And (iii) does a scan show that the disc you thought was the problem is the problem? If all 3 answers align, then there may be some good reason to consider treatment options. If the 3 answers are not aligned, be wary of a surgeon suggesting intervention, because 90% of us will experience back pain at some point in our lives, and 90% of the population don’t need back surgery.”
 
 
Back pain requiring immediate medical attention
 
Although the majority of LBP tends to be benign and temporary, people should seek immediate medical advice if their back pain is associated with certain red flags, such as loss of bladder control, loss of weight, fever, or upper back or chest pain; if there is no obvious cause for the pain; or if the pain is accompanied by weakness, loss of sensation or persistent pins and needles in the lower limbs. Also, people with chronic lifetime conditions such as cancer should pay particular attention to back pain.
 
Epidemiology of LBP

Back pain affects approximately 700m people worldwide. A 2011 report by the US Institute of Medicine estimates that 100m Americans are living with chronic back pain, more than the total affected by heart disease, cancer, and diabetes combined. This represents a vast market for therapies, which include surgery and the prescription of opioids. Estimates of the prevalence of LBP vary significantly between studies. There is no convincing evidence that age affects the prevalence of back pain, and published data do not distinguish between LBP that persists for more than, or less than, a year. Each year LBP affects some 33% of UK adults, and around 20% of these - about 2.8m - will consult their GP. One year after a first episode of back pain, 62% of people still experience pain, and 16% of those initially unable to work are not working after 1 year. Typically, in about 60% of cases pain and disability improve rapidly during the first month after onset.

 

Non-invasive therapies for LBP

The most common non-invasive treatment for LBP is non-steroidal anti-inflammatory drugs (NSAIDs). Other pain medications include paracetamol, oral steroids, gabapentin/pregabalin, opioids, muscle relaxants, and antidepressants. Non-drug therapies include chiropractic manipulation, osteopathy, epidural injections, transcutaneous electrical nerve stimulation (TENS), ultrasound, which uses vibration to deliver heat and energy to parts of the lower back, physiotherapy, massage, and acupuncture.
 
Prelude to surgery
 
Despite the range of non-invasive therapies for LBP, the incidence of lumbar spinal fusion surgery for ordinary LBP increased significantly over the past 2 decades without definitive evidence of the efficacy of the procedure. Recent guidelines from UK and US regulatory bodies have instructed doctors to consider more conservative therapies for the management of back pain, and this has resulted in a reduction in the incidence of spinal fusion surgeries.
 
Notwithstanding the clear recognition of the paucity of evidence for reliable rates of improvement following fusion for back pain, it does not follow that fusions should never be done; indeed, there are many instances where fusion is strongly supported by evidence. The gold standard for diagnosing degenerative disc disease is MRI evidence, which has formed the principal basis for surgical decisions in older adults. However, studies suggest that although MRI evidence indicates that degenerative change in the lumbar spine is common among people over 60, the overwhelming majority do not have chronic LBP.
 
Increasing prevalence of spinal fusion surgery
 
Each year, NHS England undertakes some 10,000 spinal surgeries for LBP at a cost of some £200m. This is in addition to the large and growing number of patients receiving epidurals, which cost the NHS about £9bn a year and for which evidence of efficacy is similarly weak. In the US more than 300,000 back surgeries are performed each year. In 10 to 39% of these cases, pain may continue or even worsen after surgery, a condition known as ‘failed back surgery syndrome’. In the US, about 80,000 new cases of failed back surgery syndrome arise each year. Pain after back surgery is difficult to treat, and many patients are obliged to live with pain for the rest of their lives, which causes significant disability.
  
Back pain and premature death
 
A study by researchers from the University of Sydney, published in 2017 in the European Journal of Pain, found that older people with persistent chronic back pain have a higher chance of dying prematurely. The study examined the prevalence of back pain in nearly 4,400 Danish twins over 70. The researchers then compared their findings with the death registry and concluded that, "Older people reporting spinal pain have a 13% increased risk of mortality per year lived, but the connection is not causal." According to lead author Matthew Fernandez, “This is a significant finding as many people think that back pain is not life-threatening.” Previous research has suggested that chronic pain can wear down peoples’ immune systems and make them more vulnerable to disease.
 
Spinal fusion
 
A relatively small group of elite spine surgeons, mostly from premier medical institutions, regularly carry out essential complex surgeries required for dire and paralysis-threatening conditions such as traumatic injuries, spinal tumors, and congenital spinal abnormalities. The majority of procedures undertaken by a significant number of spine surgeons, however, have been elective fusion procedures for people diagnosed with pain referred to as “axial”, “functional” and “non-specific”. According to a study undertaken by the American Spine Research Association, the people most likely to benefit from spine surgery are the young, fit and healthy. Notwithstanding, the study also suggests that the typical American candidate for spinal fusion surgery is an overweight, over-55-year-old smoker on opioids.
 
Steady growth projected for the spinal fusion market

The spine surgery market is relatively mature and dominated by a few global corporations: Medtronic, DePuy, Stryker, and Zimmer Biomet. According to a 2017 report from the consulting firm GlobalData, the market for spinal fusion, which includes spinal plating systems, interbody devices, vertebral body replacement devices, and pedicle screw systems, is set to rise from approximately US$7bn in 2016 to US$9bn by 2023, representing a compound annual growth rate of 3.4%. The increasing prevalence of age-related degenerative spinal disorders, continued technological advances in spinal fusion surgeries, such as expandable interbody cages and navigation systems, and the increased adoption of minimally invasive techniques have driven this relatively steady market growth.
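For reference, a compound annual growth rate over \(n\) years is computed from the start and end market values as shown below. Applying the formula to the report's rounded endpoints (US$7bn in 2016 to US$9bn in 2023) gives a figure in the region of 3 to 4%, consistent with GlobalData's reported 3.4% once its exact, unrounded figures and window are used:

\[
\mathrm{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1,
\qquad
\left(\frac{9}{7}\right)^{1/7} - 1 \approx 3.7\%
\]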
 
Spinal fusion surgery

Lumbar spinal fusion surgery has been performed for decades. It is a technique that unites - fuses - 2 or more vertebrae to eliminate the motion between them. The procedure involves placing a bone graft around the spine, which, over time, heals like a fracture and joins the vertebrae together. The surgery takes away some spinal flexibility, but since most spinal fusions involve only small segments of the spine, the surgery does not limit motion significantly.
 
Lumbar spinal fusion

Fusion using bone taken from the patient - an autograft - has a long history of use, results in predictable healing, and is currently the “gold standard” source of bone for a fusion. One alternative is an allograft: cadaver bone, typically acquired through a bone bank. In addition, several artificial bone graft materials have been developed, including: (i) demineralized bone matrices (DBMs), created by removing calcium from cadaver bone; without the mineral, the bone can be changed into a putty or gel-like consistency and used in combination with other grafts, and it may also contain proteins that help in bone healing; (ii) bone morphogenetic proteins (BMPs), powerful synthetic bone-forming proteins that promote fusion and have FDA approval for certain spine procedures; and (iii) ceramics, synthetic calcium/phosphate materials similar in shape and consistency to the patient’s own bone.
 
Different approaches to fusion surgery

Spinal fusion surgery can be either minimally invasive (MIS) or open. The former is easily marketable to patients because smaller incisions are often perceived as superior to traditional open spine surgery. Notwithstanding, open fusion surgery may also be performed using surgical techniques that are considered “minimally invasive”, because they require relatively small surgical incisions and do minimal muscle or other soft-tissue damage. After the initial incision, the surgeon moves the muscles and structures to the side to see your spine. The damaged or painful disc, and the joint or joints between the affected vertebrae, are then removed, and screws, cages, rods, or pieces of bone graft are used to connect the vertebrae and keep them from moving. Generally, MIS decreases the muscle retraction and disruption necessary to perform the same operation, in comparison to traditional open spinal fusion surgery, although this depends on the preferences of individual surgeons. The indications for MIS are identical to those for traditional large-incision surgery, and a smaller incision does not necessarily mean less risk.

There are 3 main approaches to fusion surgery: (i) the anterior approach, which reaches your spine from the front and requires an incision in the lower abdomen; (ii) the posterior approach, done from your back; and (iii) the lateral approach, from your side.

 
Difficulty identifying source of back pain
 
A major obstacle to the successful treatment of spine pain by fusion is the difficulty in accurately identifying the source of a patient’s pain. The theory is that pain can originate from spinal motion, and fusing the vertebrae together to eliminate the motion will get rid of the pain. Current techniques to precisely identify which of the many structures in the spine could be the source of a patient’s back pain are not perfect. Because it can be challenging to locate the source of pain, treatment of back pain alone by spinal fusion is somewhat controversial. Fusion under these conditions is usually viewed as a last resort and should be considered only after other nonsurgical measures have failed.
 
Spinal fusion surgery is only appropriate for a very small group of back pain sufferers

Nick Thomas, also a consultant neurosurgeon at King’s College Hospital Trust, London, and the London Neurosurgery Partnership, suggests there is a scarcity of preoperative tests to indicate whether lumbar spinal fusion surgery is appropriate, and stresses that spinal fusion is appropriate only for a small group of patients who present with back pain.
 
“The overwhelming majority of patients who present with low back pain will be treated non-operatively. In a few very select cases, spinal fusion may be appropriate. A challenge in managing low back pain is that there are precious few pre-operative investigations that give a clear indication of whether a spinal fusion may or may not work. Even with MRI evidence it can be very difficult to determine whether changes in a disc are the result of the normal process of degeneration or whether they reflect a problem that might be generating the back pain. If patients fail to respond to non-operative treatments they may well consider spinal fusion. A very small group of patients, who present with a small crack in one of the vertebral bones - a pars defect - or slippage of the vertebrae - spondylolisthesis - may respond favorably to spinal fusion. In patients where the cause of the back pain is less clear, the success rate of spinal fusion is far lower.” See video:
 
 
Back pain industry

In Crooked, a book published in 2017, investigative journalist Cathryn Jakobson Ramin suggests that the US $100bn-a-year back pain industry is “often ineffective, and sometimes harmful”. Ramin challenges the assumptions of a range of therapies for back pain, including surgery, epidurals, chiropractic methods, physiotherapy, and analgesics. She is particularly damning about lumbar spinal fusion surgery. In the US some 300,000 such procedures are carried out each year at a cost of about $80,000 per surgery. Ramin suggests these have a success rate of 35%.
 
Over a period of 6 years Ramin interviewed spine surgeons, pain specialists, physiotherapists, and chiropractors. She also met with patients whose pain and desperation had led them to make life-changing decisions. This prompted her to investigate evidence-based rehabilitation options and to suggest how these might help back pain sufferers avoid the range of current therapies, save time and money, and reduce their anxiety. According to Ramin, people in pain are poor decision makers, and the US back pain industry exemplifies the worst aspects of American healthcare. But this is changing.
 
New Guidelines for LBP
 
In February 2017, the American College of Physicians published updated guidelines, which recommended surgery only as a last resort. It also said that doctors should avoid prescribing opioid painkillers for relief of back pain, and suggested that before patients try anti-inflammatories or muscle relaxants, they should try alternative therapies such as exercise, acupuncture, massage therapy or yoga. Doctors should reassure their patients that they would get better no matter what treatment they tried. The guidelines also said that steroid injections were not helpful, and neither was paracetamol, although other over-the-counter analgesics such as aspirin or ibuprofen could provide some relief. The UK’s National Institute for Health and Care Excellence (NICE) has also updated its guidelines (NG59) for back pain management. These make it clear that a significant proportion of back pain surgery is not efficacious, and they instruct doctors to recommend various aerobic and biomechanical exercises. NHS England and private health insurers are changing their reimbursement policies, and as a consequence the incidence of back surgery has fallen significantly.
 
In perspective

Syed Aftab, a Consultant Spinal Orthopaedic Surgeon at the Royal London, Barts Health NHS Trust, welcomes the new guidelines, but warns that, “We should be careful that an excellent operation performed by some surgeons on some patients does not get ‘vilified’. If surgeons stop performing an operation because of the potential of being vilified, patients who could benefit from the procedure lose out”.
 
Surgical cycle

“There seems to be a 20-year cycle for surgical procedures such as lumbar fusion. The procedure starts; some patients benefit and do well. This encourages more surgeons to carry out the procedure. Over time, indications become blurred, and the procedure is more widely used by an increasing number of surgeons. Not all patients do well. This leads to surgeons being scrutinized, some vilified; the procedure gets a bad name, surgeons stop performing the operation, and patients who could benefit from the procedure lose out,” says Aftab, who is also a member of Complex Spine London, a team of spinal surgeons and pain specialists who focus on an evidence-based multidisciplinary approach to spinal pathology.
 
Takeaway
 
LBP is a common, disabling and costly health challenge. Although therapies are expensive, not well founded on evidence, and have a relatively poor success rate, their prevalence has increased over the past 2 decades, and an aging population does not explain this entirely. Although the prevalence of lumbar spinal fusion surgery has decreased in recent years, the spine has become a rewarding source of income for global spine companies, and there have also been allegations of conflicts of interest in this area of medicine. With the new UK and US guidelines the tide has changed, but ethical questions, albeit historical, should still be heeded.
  • Everyone connected with healthcare supports interoperability, saying it improves care, reduces medical errors and lowers costs
  • But interoperability is a long way from reality, and electronic patient records are only part of an answer
  • Could Blockchain, a technology disrupting financial systems, resolve interoperability in healthcare?
  • Blockchain is an open-source decentralized “accounting” platform that underpins crypto currencies
  • Blockchain does not require any central data hubs, which in healthcare have been shown to be easily breached
  • Blockchain technology creates a virtual digital ledger that could automatically record every interaction with patient data in a cryptographically verifiable manner
  • Some experts believe that Blockchain could improve diagnosis, enhance personalised therapies, and prevent highly prevalent devastating and costly diseases
  • Why aren’t healthcare leaders pursuing Blockchain with vigour?
 
Why Blockchain technology will not disrupt healthcare

Blockchain technology is disrupting financial systems by enhancing the reconciliation of global transactions and creating an immutable audit trail, which significantly improves the ability to track information at lower cost while protecting confidentiality. Could Blockchain do something similar for healthcare and resolve the challenges of interoperability, by providing an inexpensive and enhanced means to immutably track, store, and protect a variety of patient data from multiple sources, while giving different levels of access to health professionals and the public?
 
Blockchain and crypto currencies

You might not have heard of Blockchain, but you have probably heard of bitcoin: an intangible or crypto currency created in 2008, when a programmer called Satoshi Nakamoto (a pseudonym) described bitcoin’s design in a paper posted to a cryptography e-mail list. Then, in early 2009, Nakamoto released Blockchain: an open-source, global, decentralized accounting ledger, which underpins bitcoin by executing and immutably recording transactions without the need of a middleman. Instead of a centrally managed database, copies of the cryptographic balance book are spread across a network and automatically updated as transactions take place. Bitcoin gave rise to other crypto currencies, which exist only as transactions and balances recorded on a public ledger in the cloud, verified by a distributed group of computers.
 
Broad support for interoperability
 
Just about everyone connected with healthcare - clinicians, providers, payers, patients and policy makers - supports interoperability, arguing that data must flow rapidly, easily and flawlessly through healthcare ecosystems to reduce medical errors, improve diagnosis, enhance patient care, and lower costs. Despite such overwhelming support, interoperability is a long way from reality. As a result, health providers spend too much time calling other providers about patient information, emailing images and records, and attempting to coordinate care efforts across disjointed and disconnected healthcare systems. This is a significant drain on valuable human resources, which could be more effectively spent with patients or used to remotely monitor patients’ conditions. Blockchain may provide a solution to the challenges of interoperability in healthcare.
 
Electronic patient records do not resolve interoperability

A common misconception is that electronic patient records (EPR) resolve interoperability. They do not. EPRs were created to coordinate patient care inside healthcare settings by replacing paper records and filing cabinets. They were not designed as open systems that can easily collect, amalgamate and monitor a range of medical, genetic and personal information from multiple sources. To realize the full potential and promise of interoperability, EPRs need to be easily accessible digitally and, in addition, have the capability to collect and manage remotely generated patient healthcare data as well as pharmacy and prescription information, family health histories, genomic information and clinical-study data. To make this a reality, existing data management conventions need to be significantly enhanced, and this is where Blockchain could help.

 

Blockchain will become a standard technology
 
Think of a bitcoin, or any other crypto currency, as a block capable of storing data. Each block can be subdivided countless times to create subsections. Thus, it is easy to see that a block may serve as a directory for a healthcare provider. Data recorded on a block can be public, but are encrypted and stored across a network. All data are immutable except for additions. Because of these and other capabilities, it seems reasonable to assume that Blockchain may become a standard technology over the next decade.
 

Blockchain and healthcare

Because crypto currencies are unregulated and sometimes used for money laundering, they are perceived as “shadowy”. However, this should not be a reason for dismissing Blockchain technology. 30 corporations, including J.P. Morgan and Microsoft, are uniting to develop decentralized computing networks based on Blockchain technology. Further, crypto currencies are approaching the mainstream, and within the financial sector there is significant and growing interest in Blockchain technology to improve interoperability. Financial services and healthcare have similar interoperability challenges, but health providers appear reluctant to contemplate a fundamental re-design of EPRs, despite the critical need for innovation as genomic data and personalized targeted therapies rise in significance and require advanced data management capabilities. Here are 2 brief examples, which describe how Blockchain is being used in financial services.
 
Blockchain’s use in financial services
 
In October 2017, the State Bank of India (SBI) announced its intention to implement Blockchain technology to improve the efficiency, transparency, security and confidentiality of its transactions while reducing costs. In November 2017, the SBI’s Blockchain partner, Primechain Technologies suggested that the key benefits of Blockchain for banks include, “Greatly improved security, reduced infrastructure cost, greater transparency, auditability and real-time automated settlements.”
 
Dubai, a global city in the United Arab Emirates, is preparing to introduce emCash as a crypto currency, and could become the world’s first Blockchain government by 2020. The changes Dubai is implementing will eventually lead to the end of traditional banking there. Driving the transformation is Nasser Saidi, chief economist of the Dubai International Financial Centre, a former vice-governor of the Bank of Lebanon and a former economics and industry minister of that country. Saidi perceives the benefits of Blockchain to include the phasing out of costly traditional infrastructure services such as accounting and auditing.

 
Significant data challenges

Returning to healthcare, there are specific challenges facing interoperability, which include: (i) how to ensure patient records remain secure and are not lost or corrupted, given that so many people are involved in the healthcare process for a single patient, and communication gaps and data-sharing issues are pervasive; and (ii) how health providers can effectively amalgamate and monitor the genetic, clinical and personal data from a variety of sources that are required to improve diagnosis, enhance treatments and reduce the burden of devastating and costly diseases.
 
Vulnerability of patient data

Not only do EPRs fail to resolve these two basic challenges of interoperability, they are also vulnerable to cybercriminals. Recently there has been an epidemic of computer hackers stealing EPRs. In June 2016 a hacker claimed to have obtained more than 10m health records, and was alleged to be selling them on the dark web. Also in 2016, hundreds of breaches involving millions of EPRs in the US were reported to the Department of Health and Human Services. The hacking of 2 American health insurers alone, Anthem and Premera Blue Cross, affected some 90m EPRs.
 
In the UK, patient data and NHS England’s computers are no less secure. On 12 May 2017, a relatively unsophisticated ransomware called WannaCry infected NHS computers and affected the health service’s ability to provide care to patients. In October 2017, the National Audit Office (NAO) published a report on the impact of WannaCry, which found that 19,500 medical appointments were cancelled, computers at 600 primary care offices were locked, and 5 hospitals had to divert ambulances elsewhere. Amyas Morse, head of the NAO, suggested that, “The NHS needs to get their act together to ensure the NHS is better protected against future attacks.”

 
Healthcare legacy systems
 
Despite the potential benefits of Blockchain to healthcare, providers have not worked out fully how to move on from their legacy systems and employ innovative digital technologies with sufficient vigour to effectively enhance the overall quality of care while reducing costs. Instead they tinker at the edges of technologies, and fail to learn from best practices in adjacent industries.  
 
“Doctors and the medical community are the biggest deterrent for change”
 
Devi Shetty, heart surgeon, founder, and Chairperson of Narayana Health, articulates this failure: “Doctors and the medical community are the biggest deterrent for the penetration of innovative IT systems in healthcare to improve patient care . . . IT has penetrated every industry in the world with the exception of healthcare. The only IT in patient care is software built into medical devices, which doctors can’t stop. Elsewhere there is a dearth of innovative IT systems to enhance care,” see video. Notwithstanding, Shetty believes that, “The future of healthcare is not going to be an extension of the past. The next big thing in healthcare is not going to be a new drug, a new medical device or a new operation. It is going to be IT.”
 
 
Google, Blockchain and healthcare
 
Previous HealthPad Commentaries have suggested that the failure of healthcare providers to fully embrace innovative technologies, especially those associated with patient data, has created an opportunity for giant technology companies to enter the healthcare sector and disintermediate healthcare professionals.

In May 2017, Google announced that its AI-powered subsidiary, DeepMind Health, intends to develop the “Verifiable Data Audit”, which uses Blockchain technology to create a digital ledger that automatically records every interaction with patient data in a cryptographically verifiable manner. This is expected to significantly reduce medical errors, since any change to, or access of, the patient data is visible, and both healthcare providers and patients would be able to securely track personal health records in real time.
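The core mechanism is straightforward to sketch. Below is a minimal, hypothetical Python illustration of an append-only, hash-chained audit log of the kind described: each entry commits to the hash of its predecessor, so any retroactive tampering breaks the chain. It is a toy model of the general idea, not DeepMind's implementation.

```python
import hashlib
import json
import time

def entry_hash(entry):
    """Deterministic SHA-256 digest of a log entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditLog:
    """Append-only log: each entry embeds the hash of the previous one,
    so altering any past entry invalidates every later link."""

    def __init__(self):
        self.entries = []

    def record(self, actor, patient_id, action):
        self.entries.append({
            "actor": actor,          # who touched the data
            "patient": patient_id,   # whose record was touched
            "action": action,        # e.g. "viewed", "updated"
            "time": time.time(),
            "prev": entry_hash(self.entries[-1]) if self.entries else None,
        })

    def verify(self):
        """True only if every entry still commits to its predecessor."""
        return all(e["prev"] == entry_hash(self.entries[i - 1])
                   for i, e in enumerate(self.entries) if i > 0)

log = AuditLog()
log.record("dr_smith", "patient_42", "viewed")
log.record("nurse_jones", "patient_42", "updated")
print(log.verify())                    # True: the chain is intact
log.entries[0]["actor"] = "intruder"   # tamper with history...
print(log.verify())                    # False: the tampering is detectable
```

In a real deployment the chain would be replicated across independent parties, so no single administrator could quietly rewrite history and re-hash the whole log; that replication is what the Blockchain element adds.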

 
Takeaways

Blockchain is an innovative and powerful new technology that could play a significant role in overcoming the challenges of interoperability in healthcare, and in doing so help to enhance the quality of care, improve diagnosis, reduce costs and prevent devastating diseases. However, even if Blockchain were the perfect technological solution to interoperability, change would not happen in the short term. As Max Planck said, “A new scientific innovation does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” While we wait for those who control our healthcare systems to die, billions of people will continue to suffer from preventable lifetime diseases, healthcare costs will escalate, healthcare systems will go bankrupt, and productivity in the general economy will fall.
  • Chronic obstructive pulmonary disease (COPD) is a lung condition, which makes it hard to breathe, but is often preventable and treatable   
  • COPD affects some 210m people worldwide, its prevalence is increasing, and it costs billions in treatment and lost production
  • By 2020 COPD is projected to be the 3rd leading cause of death worldwide
  • Recently, scientific advances have benefitted COPD research
  • But COPD researchers are challenged to provide compelling data in support of their studies
  • COPD research would benefit from smart online communications strategies
  • This could strengthen collaboration among globally dispersed scientists and people living with COPD, and expand the geographies from which COPD data are retrieved
  
Chronic Obstructive Pulmonary Disease (COPD) and the battle for breath
 
Chronic Obstructive Pulmonary Disease (COPD) is a common, preventable and treatable disorder, which affects 210m people worldwide. Its prevalence is increasing globally, and each year it causes some 3m deaths. Although COPD therapies have improved substantially in recent years, and benefit from advancing science, researchers are still challenged to provide compelling data in support of their studies. There is no definitive treatment for COPD, and more research is needed to improve the condition’s clinical management. There are regions of the world where the prevalence of COPD is increasing significantly, but where information about the disorder is sparse. This Commentary suggests that COPD research could benefit by enhancing the connectivity of globally dispersed scientists and people living with the disorder, and expanding the geographies from which COPD data are retrieved. Before suggesting ways to achieve this, let us describe COPD and its vast and escalating burden.
 
Chronic Obstructive Pulmonary Disease (COPD)
 
COPD is an umbrella term used to describe common progressive lifetime diseases, which damage the lungs and airways, and make breathing difficult. Its prevalence is increasing especially in developing countries. It is the 4th leading cause of death worldwide and projected to be the 3rd by 2020. The causes of COPD are well known, but the nature of the condition is still not fully understood even though COPD therapies have improved significantly in recent years. The effects of COPD are persistent and progressive, but treatment can relieve symptoms, improve quality of life and reduce the risk of death. COPD impacts people differently, medications affect patients differently, and such differences make it challenging for doctors to identify patients who are at risk of a more rapidly progressing condition.

Although COPD is complex, with different etiologies, pathologies and physiological effects, there are two main forms: (i) chronic bronchitis, which involves a long-term cough with mucus, and (ii) emphysema, which involves damage to the lungs over time. COPD also has significant extra-pulmonary effects, which include weight loss, nutritional abnormalities and skeletal muscle dysfunction, and it is a major cause of psychological suffering. Further, COPD may promote heart failure: obstruction of the airways and damage to the lining of the lungs can result in abnormally low oxygen levels in the vessels inside the lungs, creating excess strain on the right ventricle through pulmonary hypertension, which can result in heart failure.

In developed countries, the biggest risk factor for the development of COPD is cigarette smoking, whereas indoor pollutants are the major risk factor for the disease in developing nations. Not all smokers develop COPD and the reasons for disease susceptibility in these individuals have not been fully elucidated. Although the mechanisms underlying COPD remain poorly understood, the disease is associated with chronic inflammation, which is usually corticosteroid resistant, destruction of the airways, and lung parenchyma (functional tissue). There is no cure for COPD, but it is sometimes partially reversible with the administration of inhaled long-acting bronchodilators, and its progression can be slowed through smart maintenance therapy, in particular a cessation of smoking. People with stage 1 or 2 COPD lose at most a few years of life expectancy at age 65 compared with persons with no lung disease, in addition to any years lost due to smoking. Current smokers with stage 3 or 4 COPD lose about 6 years of life expectancy, in addition to the almost 4 years lost due to smoking.
 
The economic burden of COPD is vast and increasing, with attributed costs for hospitalizations, loss of productivity, and disability, in addition to medical care. In 2010, the condition’s annual cost in the US alone was estimated to be approximately US$50bn, which includes $20bn in indirect costs, and $30bn in direct health care expenditures. COPD treatment costs the UK more than £1.9bn each year. Over the past decade in the UK progress in tracking the disease has stagnated, and there is a wide variation in the quality of care.

 
Prevalence

The prevalence of COPD has increased dramatically due to a combination of aging populations, higher smoking prevalence, changing lifestyles and environmental pollution. In developed economies, COPD affects an estimated 8 to 10% of the adult population, 15 to 20% of the smoking population, and 50 to 80% of lung cancer patients with substantial smoking histories. For many years, COPD was considered to be a disease of developed nations, but its prevalence is increasing significantly in developing countries, where almost 90% of COPD deaths occur. Most of the research data on COPD come from developed countries, yet even there accurate epidemiologic data on the condition are challenging and expensive to collect. There is a dearth of systematically collected COPD prevalence data from developing nations, and a paucity of COPD studies in Africa, SE Asia and the Eastern Mediterranean region. Most of the available prevalence estimates from low- to middle-income countries are not based on spirometry testing (the internationally accepted gold standard for the diagnosis of COPD, which measures lung capacity). Hence, the available COPD data from developing countries cannot be interpreted reliably in a global context, and more data from these regions are necessary to extend and support further studies.

 

Mortality
 
COPD is one of the 3 leading contributors to respiratory mortality in developed countries, along with lung cancer and pneumonia. Globally, it is estimated that 3m deaths were caused by COPD in 2015, some 5% of all deaths that year. The 5-year mortality rate for people with COPD typically ranges from 40 to 70%, depending on disease severity, while the 2-year mortality rate for people with severe COPD is about 50%: worse than the rates for many common cancers. India and China, with 33% of the world’s population, account for 66% of global COPD mortality. Further, it has been estimated that COPD-associated mortality is likely to grow by 160% in SE Asia in the coming decades, where COPD research and data are sparse.


Risk factors

Air pollution
Air pollution is a risk factor for COPD and other respiratory disorders. According to a 2016 World Health Organization report, about 92% of the world’s population is exposed to dirty air. The Commission on Pollution and Health, the most comprehensive global analysis to date, published in The Lancet in October 2017, suggests that each year air pollution kills over 9m people prematurely and costs US$4.6tn, equivalent to more than 6% of global GDP.

Tobacco smoke
In advanced industrial economies, exposure to tobacco smoke is the number one risk factor for developing COPD, and cigarette smoking is linked to 80% of all COPD deaths. In the US, for instance, approximately 25% of the adult population continue to smoke, despite aggressive smoking prevention and cessation efforts. Each year COPD claims some 134,700 American lives; it is the 4th leading cause of death in the US, and expected to be the 3rd by 2020.

Biomass fuels
In developing economies the COPD burden is driven more by exposure to indoor air pollution, such as the use of biomass fuels for cooking and heating. Almost 3bn people worldwide use biomass and coal as their main source of energy for cooking, heating, and other household needs. In these communities biomass fuels are often burned inefficiently in open fires, leading to levels of indoor air pollution responsible for a greater degree of COPD risk than smoking or outdoor air pollution. Biomass fuels account for the high prevalence of COPD among non-smoking women in parts of the Middle East, Africa and Asia, where indoor air pollution is estimated to kill 2m women and children each year. COPD research and data from these regions are sparse.

Genetics
In some people, COPD is caused by a genetic condition known as alpha-1 antitrypsin deficiency (AATD). People with AATD do not make a type of protein that helps to protect the lungs. Because not all individuals with COPD have AATD, and because some individuals with COPD have never smoked, it is suggested that there are other genetic predispositions to developing COPD. AATD is not a common cause of COPD, and few people know they have the genetic condition. In the US for example, it is estimated that only about 100,000 people have AATD.
 
Symptoms and diagnosis
 
The typical symptoms of COPD are cough, excess sputum production, dyspnea (difficulty breathing), recurring respiratory infections, and fatigue. Because symptoms develop relatively slowly, people are sometimes unaware that they have lung problems. People with COPD are diagnosed by way of a multifactorial assessment that includes: spirometry, clinical presentation, symptomatology, and risk factors.
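Spirometry, mentioned above, yields two key numbers: FEV1, the volume exhaled in the first second of a forced breath, and FVC, the total volume forcibly exhaled. The sketch below shows how they are conventionally combined; the thresholds follow the widely cited GOLD convention (obstruction when FEV1/FVC is below 0.70, severity graded by FEV1 as a percentage of the predicted value) rather than this Commentary, and the function is illustrative only.

```python
def gold_grade(fev1_litres, fvc_litres, fev1_pct_predicted):
    """Grade airflow limitation from spirometry, per the GOLD convention.

    fev1_litres: volume exhaled in the first second of forced expiration
    fvc_litres: total volume of the forced expiration
    fev1_pct_predicted: FEV1 as a % of the value predicted for a person
                        of the same age, sex and height
    """
    if fev1_litres / fvc_litres >= 0.70:
        return "no airflow obstruction on spirometry"
    if fev1_pct_predicted >= 80:
        return "GOLD 1 (mild)"
    if fev1_pct_predicted >= 50:
        return "GOLD 2 (moderate)"
    if fev1_pct_predicted >= 30:
        return "GOLD 3 (severe)"
    return "GOLD 4 (very severe)"

# Example: FEV1 of 1.8 L against an FVC of 3.0 L (ratio 0.60),
# with FEV1 at 55% of the predicted value.
print(gold_grade(1.8, 3.0, 55))  # GOLD 2 (moderate)
```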
 
COPD management

The heterogeneous nature of COPD, the fact that it affects different people differently, and the differing impact of therapies on the condition present challenges for clinicians. There are several types of drugs that can be used for the condition, based on whether the drug is intended to improve airflow obstruction, provide symptom relief, modify or prevent exacerbations (a worsening of symptoms often precipitated by infection), or alter the progression of the disease. A drug may affect only one aspect of the condition or act on many, and it may also benefit COPD patients in other meaningful ways.

View from a leading pulmonologist
“Some treatments for COPD overlap with asthma,” says Murali Mohan, Consultant Pulmonologist at Narayana Health City in Bengaluru, India. “The foundation for treating COPD is inhaled long-acting bronchodilators, whereas corticosteroids are beneficial primarily in patients who have coexisting features of asthma, such as eosinophilic inflammation and more reversibility of airway obstruction . . . An important part of COPD management is for smokers to stop, and to reduce a patient’s exposure to pollutants both in the home and at work. Vaccines are used to prevent serious infections . . . People with COPD tend to eat less, and become breathless when they eat. There is a lot of systemic inflammation, which causes patients to lose weight, but being overweight is just as bad. So we ensure that COPD patients adopt a healthy diet and exercise. This is to obtain an ideal body weight, and to supplement muscle strength, which is very important because it is the muscles that move the lungs and get the air in and out of the chest . . . Often we recommend psychotherapy because a lot of people with COPD are depressed. More research is needed to better understand the condition’s mechanisms, and to develop new treatments that reduce disease activity and progression,” says Mohan; see videos below.
 
What are the treatments for COPD?
 
 COPD market and changing treatment landscape
 
Given the vast and escalating global prevalence of COPD, the market for therapies is also huge, global and rapidly growing, and giant pharmaceutical companies compete aggressively for market share. The current size of the COPD market is estimated to be US$17bn. The overall respiratory therapeutics market, which in addition to COPD includes asthma, idiopathic pulmonary fibrosis (IPF), and cystic fibrosis, is about US$30bn and projected to grow to US$47bn by 2022. Currently, some 900 drugs are in development for all types of respiratory disorders. The sheer size and rate of growth of this market, plus the fact that there is still no definitive treatment for COPD, motivates pharmaceutical companies to commit millions to its research. Notwithstanding, the overwhelming majority of current research data are derived from a relatively narrow band of developed nations.
 
COPD research

Influence of cigarette smoking on COPD research
For many years COPD research concentrated on the condition’s association with cigarette smoking. This led to the early discovery that a subgroup of patients with emphysema was genetically deficient in an inhibitor of an enzyme that breaks down proteins and peptides. Although this explanation captures key elements of COPD, it has led neither to a reduction in the condition’s prevalence or morbidity, nor to the development of any therapy proven to modify the disease process itself, nor to an adequate understanding of how risk factors other than cigarette smoking may contribute to COPD pathogenesis.
 
Biologics
Although research has improved and our understanding of COPD has advanced, challenges remain for researchers. Contributing to these is the broader array of mechanisms implicated in COPD’s pathogenesis compared with many other respiratory disorders. Notwithstanding, a determined focus on a range of targeted biologic agents as potential therapies has led to an improved understanding of the pathophysiology and clinical manifestations of COPD, and to an increased awareness of the importance of inflammation.
 
Innovative sampling techniques have led to the identification of several pulmonary biomarkers (measurable substances that signal the presence of disease in the blood or tissues), which could potentially provide enhanced insight into the pathophysiological mechanisms of exacerbation. However, sampling methods could still be improved: the utility of current methods is not yet established, and they have yet to provide compelling data in support of their use in COPD. This suggests a need for more research directed toward identifying the bases of COPD exacerbations, and clarifying the pathophysiological processes that contribute to worsening of symptoms. Other research studies focus on the underlying genetics of COPD in order to find better ways of identifying which smokers are more likely to develop the condition.
 
Challenge of COPD data
 
When recruiting patients for COPD studies, it is impossible to determine the speed at which lung function will deteriorate in any given individual. This raises methodological challenges, particularly with regard to the size and nature of a cohort at the beginning and end of a study. Further, longitudinal studies require regular and systematic collection of patient data, which may be a combination of self-reporting, electronic patient records (EPR), and the results of tests undertaken by health professionals. Collecting longitudinal patients’ perceptions of the status of their COPD from a dispersed cohort is challenging because of differing distributions of the disease, and variation in the availability and quality of data on significant events, such as exacerbations.
 
Self-management

More recently, apps have been developed to encourage the self-management of COPD, and they are also potentially helpful for research, because apps are able to enter the daily lives of people with COPD unobtrusively. Their utility as research aids has been limited, however, because they are rarely configured to aggregate, export and share the data they collect, although this is changing.

The large and rapid growth of the health-related apps market, and the impact it has on shaping the attitudes and expectations of millions of people about healthcare, suggest that the utility of such devices to support clinical research will increase. Helpful in this regard is the fact that apps are being configured to enable rapid remote tests, and to collect, transmit, store and analyse data.
 
Data validity and patient compliance
Notwithstanding, two significant challenges associated with apps remain for COPD researchers. One concerns the technological adequacy of apps to consistently produce valid data; the other is the compliance of patients in COPD studies. Both, however, are being addressed.
 
Validation
A study published in 2017 in the journal Nature Biotechnology provides some validation for data derived from apps to be used in clinical studies. Scientists developed an app to collect survey data from 7,600 asthma sufferers over a 6-month period on how they managed their condition. Researchers then compared these app-generated patient-reported data with similar data from traditional asthma studies and found no significant differences. Although some methodological challenges remain in using apps to recruit patients for clinical studies, findings from this and other studies give scientists some degree of confidence that app-derived data can be reliable enough for clinical studies.
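To illustrate the kind of comparison such validation studies make (the numbers below are invented, and this is not the study’s actual analysis pipeline), a two-sample test can check whether app-reported and conventionally collected scores differ significantly:

```python
from scipy import stats

# Hypothetical symptom-control scores (0-100) from two cohorts
app_reported = [72, 68, 75, 70, 74, 69, 71, 73]
clinic_reported = [70, 71, 74, 69, 72, 70, 73, 68]

# A two-sample t-test asks whether the group means differ by more than
# chance would explain; a large p-value indicates no significant difference.
t_stat, p_value = stats.ttest_ind(app_reported, clinic_reported)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```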
 
Giant tech companies and medical research
The increasing validation of app-generated health data is driving growth in the pairing of wireless health apps with data monitoring, and creating an opportunity for giant global technology companies to enter the healthcare market by joint venturing with big pharmaceutical companies. Such ventures create big-data opportunities to aggregate vast amounts of patient data from millions of COPD sufferers, including data on the efficacy of specific drugs. They also allow patients to keep track of their drug usage remotely, and health professionals to access the data instantly to monitor an individual patient’s condition.
 
Compliance
There is some evidence to suggest that people with COPD are less compliant in recording information about their condition when they are experiencing an exacerbation, or just not feeling well. A solution might be to employ techniques that “nudge” patients to be more compliant. The genesis of nudge systems is a 2008 publication, Nudge, by US academics Cass Sunstein and Richard Thaler. The authors suggest that making small changes to the way options are presented to individuals “nudges” them to engage in behaviours that they would not normally adopt. Following the publication of the book, “nudge units” were set up in the White House and in 10 Downing Street to encourage people to change entrenched behaviours in order to improve public services while reducing costs.

The UK’s Nudge Unit has, among other things, significantly increased the rate of organ donation, and encouraged a substantial number of individuals to initiate and maintain healthier lifestyles. Mindful of these successes, governments throughout the world have set up nudge units. A 2017 OECD report suggests that nudge units have entered the mainstream, and could be used much more widely. Also in 2017, Richard Thaler was awarded the Nobel Prize for his contribution to behavioural economics. COPD researchers might consider replacing current “pull” techniques with nudge techniques to enhance patient compliance in COPD clinical studies.
 
Takeaways

For years COPD research was in the doldrums, but over the past decade things have changed significantly. Notwithstanding, COPD studies could benefit from more compelling data. This could be achieved by employing smart online communications strategies that increase the connectivity of globally dispersed COPD researchers and individuals living with the condition, with an eye to enhancing patient compliance in COPD studies, increasing the quality of research data, and expanding the geographies from which COPD data are retrieved.
  • A 2017 research project found that only 6 out of 18 FDA-approved blood glucose monitoring (BGM) systems tested were accurate
  • Each day BGM systems are used by millions of people with diabetes to help them self-manage their condition and avoid devastating and costly complications
  • Thousands of similar smart devices support the prevention and self-management of other chronic lifetime conditions, whose prevalence levels are high
  • The increasing demand for healthcare, its escalating costs, and rapidly evolving technologies are driving the growth of such remote self-managed devices
  • The most valuable aspect of such devices is the data they produce
  • These data tend to be undervalued and underutilized by healthcare providers
  • This has created an opportunity for giant technology companies to enter the healthcare market with a plethora of smart devices and start utilizing the data they collect to enhance patient outcomes and lower costs
  • Giant technology companies could dis-intermediate GPs and re-engineer primary care
 

Digital blood glucose monitors and the disruptive impact of giant tech companies on healthcare


A 2017 research project that tested 18 FDA-approved digital blood glucose monitoring (BGM) systems, which are used daily by millions of people with diabetes to check the concentration of glucose in their blood, found that only 6 were accurate. The research, led by David Klonoff of the Diabetes Research Institute at San Mateo, California, was funded by Abbott Laboratories.
 
This Commentary describes both traditional and next-generation BGM systems, and Klonoff’s research. It suggests that BGM systems are just one part of a vast, global, rapidly growing market for consumer healthcare devices, and argues that the most valuable aspect of these devices is the data they collect. With some notable exceptions, healthcare professionals do not optimally utilize these data to enhance care and reduce costs. This has created an opportunity for technology companies to enter the healthcare market and re-engineer primary care. The one thing that might slow the march of giant technology companies into mainstream healthcare is the privacy issue.
 

Traditional and next-generation BGM systems
 
Traditional BGM systems
Each day BGM systems are used by millions of people with diabetes to help them manage their condition. Managing diabetes varies from individual to individual, but people with diabetes usually self-monitor their blood glucose concentration from a small drop of capillary blood taken from a finger prick. They then apply the blood to a chemically active disposable 'test-strip'. Different manufacturers use different technology, but most systems measure an electrical characteristic of the reaction on the strip, and use this to determine the glucose level in the blood. Such monitoring is the most common way for a person with diabetes to understand how different foods, medications, and activities affect their condition. The challenge is that blood glucose levels may have to be tested up to 12 times a day. Many people find finger pricking painful, inconvenient and intrusive, and, as a consequence, do not check their glucose levels as frequently as they should, which can have significant health implications. If your levels drop too low, you face the threat of hypoglycemia, which can cause confusion or disorientation, and in its most severe forms, loss of consciousness, coma or even death. Conversely, if your blood glucose levels are too high over a long period, you risk heart disease, blindness, renal failure and lower limb amputation.
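For illustration only: meters of this kind convert the measured electrical signal into a glucose concentration using a strip-specific calibration. The linear conversion sketched below uses made-up coefficients and is not any manufacturer's algorithm.

```python
def glucose_mg_dl(current_microamps: float,
                  slope: float = 28.0,
                  intercept: float = 4.0) -> float:
    """Convert a test-strip current into a glucose estimate (mg/dl).

    Amperometric strips produce a current roughly proportional to the
    glucose oxidized on the strip; the slope and intercept here are
    illustrative calibration constants, not real device values.
    """
    return slope * current_microamps + intercept

# Example: a 4.2 microamp reading maps to roughly 122 mg/dl
print(round(glucose_mg_dl(4.2)))
```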
 

Next generation BGM system
Abbott Laboratories Inc. markets a BGM system that eliminates the need for the routine finger pricks required by traditional glucose monitors. Instead of finger pricks and strips, the system, which measures interstitial fluid glucose levels, comprises a small sensor and a reader. An optional companion app for Android mobile devices is also available. The sensor is a few centimetres in diameter and is designed to stay in place for 10 days. It is applied to the skin, usually on the upper arm. A thin (0.4mm), flexible and sterile fibre within the sensor is inserted in the skin to a depth of 5mm. The fibre draws interstitial fluid from the muscle into the sensor, where glucose levels are automatically measured every minute and stored at 15-minute intervals for 8 hours. Glucose levels can be seen at any time by scanning the reader over the sensor. When scanned, the sensor provides a reading immediately. It also shows an 8-hour history of your blood glucose levels, and a trend arrow showing the direction your glucose is heading. The device avoids the pain and inconvenience caused by finger-prick sampling, which can deter people with diabetes from taking regular measurements. In the UK the system costs £58 for the reader, plus £58 for a disposable sensor, which must be replaced every 10 days; since November 2017 it has been available on the NHS. Abbott Laboratories is a global, NYSE-traded US MedTech company, with a market cap of US$86bn, annual revenues of US$21bn, and a diabetes care division that produces annual revenues of some US$600m.
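The sensor’s store-and-scan behaviour can be pictured as a fixed-size buffer of readings plus a simple trend calculation. The sketch below is a conceptual model under those assumptions; it is not Abbott’s firmware or algorithm.

```python
from collections import deque

class SensorLog:
    """Conceptual model: 8 hours of readings at 15-minute intervals."""

    def __init__(self):
        # 8 hours / 15 minutes = 32 slots; the oldest reading is discarded
        self.readings = deque(maxlen=32)

    def store(self, mg_dl: float):
        self.readings.append(mg_dl)

    def scan(self):
        """Return the latest reading, the stored history, and a trend arrow."""
        if not self.readings:
            return None, [], "steady"
        latest = self.readings[-1]
        delta = latest - self.readings[-2] if len(self.readings) > 1 else 0
        # The 5 mg/dl threshold is arbitrary, chosen only for illustration
        arrow = "up" if delta > 5 else "down" if delta < -5 else "steady"
        return latest, list(self.readings), arrow
```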
 
Klonoff’s research on BGM systems

BGM systems used by Klonoff and his team for their research were acquired over the counter, independently of their manufacturers. All were tested according to a protocol developed by a panel of experts in BGM surveillance testing.
 
Klonoff’s research specified that, for a BGM system to be compliant, a blood glucose value had to be within 15% of a reference plasma value for a blood glucose above 100 mg/dl, and within 15 mg/dl of a reference plasma value for a blood glucose of 100 mg/dl or below. To be “approved” a BGM system had to pass all 3 trials in which it was tested. Only 6 out of 18 passed by achieving an overall compliance rate of 95% or higher.
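The accuracy criterion can be expressed as a simple rule. The following Python sketch restates it under our own naming (the treatment of a reading at exactly 100 mg/dl is our assumption):

```python
def reading_compliant(measured: float, reference: float) -> bool:
    """Apply the study's accuracy criterion to one meter reading.

    Above a 100 mg/dl reference the reading must fall within 15% of
    the reference; at or below 100 mg/dl it must fall within an
    absolute 15 mg/dl.
    """
    if reference > 100:
        return abs(measured - reference) <= 0.15 * reference
    return abs(measured - reference) <= 15

def compliance_rate(pairs) -> float:
    """Fraction of (measured, reference) pairs meeting the criterion;
    a trial is passed when this reaches 0.95 or higher."""
    return sum(reading_compliant(m, r) for m, r in pairs) / len(pairs)
```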

 

The FDA
Klonoff’s findings add credibility to patients’ concerns about the accuracy of BGM systems, which have triggered responses from both manufacturers and the US Food and Drug Administration (FDA). Manufacturers suggest that increasing the accuracy of BGM systems would raise their costs and reduce their availability, which patients do not want. The FDA tightened approvals for BGM systems, and in 2016 issued 2 sets of guidelines, one for clinical settings and another for personal home use. The guidelines only apply to new products, and do not affect BGM systems already on the market. So while the FDA’s tighter accuracy requirements are a positive change, a significant number of less-accurate BGM systems remain on the market.

Next-generation BGM systems
Next generation BGM systems use ‘sensing’ technology, and can automatically track and send blood glucose readings to the user’s smartphone and on to their healthcare provider through the cloud, where they can be amalgamated with other data. Analytics can then track an individual’s data and compare them with larger aggregated data sets to detect trends and provide personalized care.
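As a purely illustrative sketch of the kind of analytics described, the fragment below compares one user’s recent readings with aggregated population statistics; all names, numbers and thresholds are hypothetical, not any vendor’s API.

```python
from statistics import mean

def glucose_trend_z(user_readings, population_mean, population_sd, window=7):
    """Z-score of a user's recent average glucose against population data.

    A large positive value suggests the user is trending above the
    aggregated norm and may warrant a closer look by their care team.
    """
    recent = user_readings[-window:]
    return (mean(recent) - population_mean) / population_sd

# Example: daily average readings (mg/dl) for one user
user = [104, 110, 125, 130, 128, 135, 141, 150]
z = glucose_trend_z(user, population_mean=118, population_sd=12)
if z > 1.0:
    print(f"Readings trending high (z = {z:.1f}); flag for review")
```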

Large rapidly growing remote self-managed device market

Although BGM systems address a vast global market, they represent just one part of a much larger, rapidly growing remote monitoring market intended to help prevent and self-manage all chronic lifetime conditions, while improving healthcare utilization and reducing costs. In 2015 some 165,000 healthcare apps were downloaded more than 3bn times. Of these, 44% were medical apps, and 12% were apps for managing chronic lifetime conditions. Today, mobile devices enable people to use their smartphones to inspect their eardrums, detect sleep apnoea, test haemoglobin, and measure vital signs such as blood pressure and the oxygen concentration in the blood. This is a significant advance on the early precursors of such devices: activity trackers and step counters.

Chronic lifetime conditions
21st century healthcare in developed countries is predominantly about managing chronic lifetime illnesses such as diabetes, cancer, heart disease and respiratory conditions. These 4 diseases have high prevalence levels, relatively poor outcomes, and account for the overwhelming proportion of healthcare costs. For instance, in the US alone, almost 50% of adults (117m) suffer from a chronic lifetime condition, and 25% have multiple chronic conditions. 86% of America’s $2.7 trillion annual health care expenditures are for people with chronic health conditions. This chronic disease pattern is replicated throughout the developed world, and has significant healthcare utilization and cost implications for public and private payers, individuals, and families.
 
Healthcare providers tend not to optimally utilize data

Although personal remote devices are increasingly important in the management of chronic conditions, the data these devices create are underutilized, despite their potential for improving outcomes and reducing costs. This is partly because doctors and health providers have neither the capacity nor the resources to exploit the full potential of these data; partly because doctors tend to resist using technology to improve doctor-patient interactions; and partly because remote healthcare devices have not been validated for clinical use.

Validation
Health professionals tend to prefer more expensive medical-grade devices, which ensure data validity but often drive up costs. However, research validating the data collected by remote self-managed devices for clinical use is beginning to emerge. In 2016 Analog Devices, a US multinational semiconductor company specializing in data conversion and signal processing technology, and LifeQ, a private US company with advanced bio-mathematical capabilities, announced a joint venture to establish whether data from wearables are accurate enough for clinical use.
 
A study published in 2017 in the journal Nature Biotechnology provides some validation for data derived from apps to be used clinically. Using ResearchKit, an open source framework introduced by Apple in 2015 that allows researchers and developers to create powerful apps for medical research, the 6-month study enrolled 7,600 smartphone users who completed surveys on how they used an app to manage their asthma. Researchers then compared these patient-reported data with similar data from traditional asthma research, and found no significant differences. Although some methodological challenges remain, the findings gave scientists confidence that data derived from an app could be reliable enough for clinical research. If data from self-managed remote monitoring devices are validated, such devices could unobtrusively and cost-effectively enter the daily lives of patients to collect meaningful healthcare data, which could be used to enhance outcomes. Early research adopters of ResearchKit include the University of Oxford, Stanford Medicine, and the Dana-Farber Cancer Institute.

 
Giant technology companies entering healthcare market
 
The increasing validation of data generated by mobile devices and the continued underutilization of such data by health providers has created an opportunity for giant global technology companies to enter the healthcare market by: (i) developing and marketing self-monitoring devices directly to consumers, (ii) collecting, integrating, storing and analysing data generated by these remote devices, and (iii) supporting research initiatives to validate data from remote devices for clinical use.
 

Apple Inc.
Just one example of giant technology companies entering the healthcare market is Apple Inc., which has a market cap of about US$1tn and 700m users worldwide. In 2017, Apple announced that it had been testing a BGM system that pairs with the company’s existing Watch wearable. In August 2017, the US Patent and Trademark Office published a series of 50 newly granted patents to Apple. One covers an invention relating to health data, and more specifically to a smartphone that computes health data.
 
The technology involves emitting light onto a user’s body part and measuring the amount of light reflected back. These data can then help to determine body fat, breathing and even emotional health. This and other patents issued to Apple fuel rumors that the company is preparing to turn its flagship smartphone into a predominantly healthcare-focused device.

 
Takeaway
 
Given the size and momentum of technology giants entering the healthcare market, and given the powerful demographic, technological, social and economic drivers of this market, it seems reasonable to assume that in the medium term giant technology companies are well positioned to dis-intermediate primary care doctors and re-engineer primary care. One thing that could slow this march is the question of privacy. Health records are as private as private gets: from alcohol or drug abuse to sexually transmitted diseases or details of abortions, these are things we may never want to reveal to employers, friends or even family members. Significantly, these data are permanent, and privacy at this point is non-negotiable.
  • 'Drunkorexia' is a growing and dangerous trend among young people to eat less, purge or exercise excessively before binge drinking
  • Purging prior to drinking includes vomiting, laxatives or self-starvation
  • The intention is to save calories for binge-drinking
  • 41% of 18 to 24 year olds in a 2016 survey of 3,000 say they are not concerned about their overall health
  • Health providers are wasting millions on traditional healthcare education
  • Experts say we need to rethink how to encourage people to assume greater personal responsibility and accountability for their health
  • Healthcare providers have failed to leverage ubiquitous technologies and people’s changed lifestyles to engage and educate patients
  • To reduce the burden of drunkorexia healthcare providers will need to gain a better understanding of patients’ behaviors and ubiquitous 21st century technologies

Drunkorexia: a devastating and costly growing condition
 
Drunkorexia is the use of extreme weight-control methods to compensate for planned binge drinking. The French refer to it as alcoolorexie: drunkenness without the kilos; eating less in order to get drunk faster and not gain weight. Drunkorexia is a term coined by the media to describe the combination of disordered eating and heavy alcohol consumption. The condition is gaining recognition in the fields of co-occurring disorders (people who have both substance use and mental health disorders), psychiatry, and addictionology. The term attempts to reconcile 2 conflicting cultures: binge drinking and the desire to be thin. The former involves ingesting significant amounts of unwanted extra calories, so people starve themselves in preparation for a night out drinking. Drunkorexia results in significant human costs from hypoglycaemia, depression, memory loss, and liver disease, and in substantial and unnecessary costs to healthcare providers.
 
Experts argue that traditional methods to lower the burden of drunkorexia cost millions and are failing, and suggest there is an urgent need to “rethink how we try and engage with people and try and encourage them to assume greater personal responsibility and accountability for their health.” This Commentary describes drunkorexia, reports some research findings on the condition, and suggests health providers could lower the large and growing burden of drunkorexia by leveraging ubiquitous technologies such as the Internet and smartphones.
 
Not an officially medical diagnosis

Drunkorexia is not an officially recognized medical condition. There is no mention of it in MedlinePlus, the US National Institutes of Health’s online medical information service produced by the National Library of Medicine. It is not mentioned in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which is published by the American Psychiatric Association and popularly known as “The Psychiatrist’s Bible”. Neither is the condition included in the World Health Organization’s International Classification of Diseases; nor in WebMD, the UK’s NHS online service NHS Choices, or the UK General Medical Council’s (GMC) website.
Signs and symptoms
 
Signs and symptoms include calorie counting to ensure no weight is gained when binge drinking, missing meals to conserve calories so that they can be spent on alcohol, over-exercising to counterbalance calorie intake, and binge drinking to the point of vomiting previously digested food.

A dangerous condition

Despite evidence to suggest that more people are turning away from alcohol and becoming teetotallers, the prevalence of drunkorexia is increasing.


It is a dangerous trend, especially among young people, which can lead to an array of physical and psychological consequences. For example, drinking in a state of malnutrition can predispose you to a higher rate of blackouts, alcohol poisoning, alcohol-related injury, violence, or illness. Drinking on an empty stomach allows ethanol to reach the blood system more rapidly, and raises your blood alcohol content, often with dangerous speed. This can render you more vulnerable to alcohol-related brain damage. In addition, alcohol abuse can have a detrimental impact on hydration and your body’s retention of minerals and nutrients, further exacerbating the consequences of malnutrition and damaging your cognitive faculties. This can lead to short- and long-term cognitive problems, including difficulty concentrating and making decisions, which ultimately can have a negative impact on academic and work-related performance. Drunkorexia also increases the risk of developing more serious eating disorders and alcohol abuse problems. As binge drinking is involved, there is also a greater risk of violence, risky sexual behavior, alcohol poisoning, substance abuse and chronic disease later in life.
 
Research

Although much of the research on drunkorexia is focused on university students, the condition is believed to be more widespread. A challenge for researchers is the attitude of university administrators and parents, who are reluctant to admit that there is a problem either in their institutions or in their homes; the condition is often dismissed as a rite of passage. Notwithstanding, a number of research studies suggest that drunkorexia is significant, fast-growing and dangerous.
 
University of Missouri study

A 2011 University of Missouri study of the relationship between alcohol misuse and disordered eating, including calorie restriction and purging, suggests that drunkorexia is predominately a young women’s condition, which could affect their long-term health. The study found that 16% of respondents reported restricting calories to “save them” for drinking. Of the students who restricted calories prior to binge drinking, 67% did so to prevent weight gain, while 21% did so to facilitate alcohol intoxication. 3 times as many women as men reported engaging in the behavior, and their stated motivations included “preventing weight gain”, “getting intoxicated faster” and “saving money”, which could then be spent either on food or on alcohol. According to Victoria Osborne, Professor of Social Work and Public Health at the university, and lead author of the study, drunkorexia can have dangerous cognitive, behavioural and physical consequences. It also puts people at risk of developing more serious eating disorders or addiction problems.
 
Australian study

In an Australian context, a 2013 study surveyed 139 female university students, aged between 18 and 29, examining compensatory eating and behaviors in response to alcohol consumption to test for drunkorexia symptomatology. 79% of respondents engaged in behavior characteristic of drunkorexia. The study also found that social norms of drinking, and the social norms associated with body image and thinness, had a significant impact on the motivation for these behaviors.
 
University of Houston study

A University of Houston study on drunkorexia, presented at the 2016 annual meeting of the Research Society on Alcoholism in New Orleans, found that 80% of the 1,200 students surveyed had had at least one heavy night of drinking in the previous month and had engaged in drunkorexic behavior. Methods of purging prior to drinking included vomiting, use of laxatives and missing meals. The study also reported that the condition is not limited to the US, and is present in both men and women.
 
Benenden’s National Health study
 
Healthcare group Benenden’s 2016 National Health Report suggests that drunkorexia is gaining ground among young people in the UK, and creating concern among healthcare professionals. According to the study, young people in the UK prefer to eat less in order to “save” calories for alcohol consumption. Of the 3,000 people surveyed, 2 out of 5 (41%) of those aged between 18 and 24 said they eat healthily only to look good, and are not concerned about their overall health. According to the report, “Pressure to be slim, an awareness of exercising calorie control, and peer pressure to drink large amounts of alcohol are all factors in this phenomenon”, adding that a growing number of men are following this trend.

Survey participants were also asked general questions about healthy lifestyles. “By and large, the findings highlight that the public is in denial about how much they think they know about healthy eating, they claim to be near-experts, but when drilling down to real-life examples, the vast majority of respondents failed to choose the right answer to simple diet-related questions, or the healthier option when offered the choice between everyday food and drinks,” the report found.
 
“There also seems to be a woeful lack of awareness about basic dietary advice, despite legislation and attempts by the food production and manufacturing industry. It isn’t clear whether this is down to poor education or a lack of interest, but I think we need to rethink how we try and engage with people and try and encourage them to assume greater personal responsibility and accountability for their health,” says Dr John Giles, Benenden’s medical director.

Traditional healthcare providers failing

Traditional healthcare providers continue to waste billions on failing methods of engaging and educating patients. Increased self-management is essential, especially as primary care resources are shrinking while the prevalence of drunkorexia is rapidly increasing. However, achieving effective education and self-management requires a fundamental transformation of the way healthcare is delivered. The majority of people living with drunkorexia regularly use their smartphones for 24-hour banking, education, entertainment, shopping, and dating. Health providers have failed to leverage this vast, rapidly growing and free infrastructure, and people’s changed lifestyles, to introduce effective educational support systems that enhance the quality of drunkorexia care, increase efficiency, and improve patient outcomes. Today, mobile technology is part of everyday life, and people expect to be connected with their relevant healthcare providers 24-7, 365 days of the year, from anywhere.

Takeaways

A necessary pre-requisite for effective healthcare education to reduce the burden of drunkorexia is the actual engagement of people with the condition. Once patients are engaged, education should inform and empower people, and provide them with access to continuous self-management support. This is substantially different to the way traditional healthcare education is delivered as it transforms the patient–educator relationship into a continuous, rich, collaborative partnership. 
  • Many people still view China as a ‘copycat’ economy, but this is rapidly changing
  • China is:
    • Pursuing a multi-billion-dollar, 15-year strategy to become a world leader in genomic engineering and personalized medicine
    • Systematically upgrading and incentivizing its large and growing pool of scientists who are making important breakthroughs in the life sciences
    • Empowering and encouraging state owned and private life science companies to own and control the capacity to transform genomic, clinical and personal data into personalized medicines
  • The difference in national approaches to individualism and privacy confers an added competitive advantage to China and its life science ambitions
  • China’s approach to individualism and privacy issues could have implications for society


The global competition to translate genomic data into personal medical therapies

 

PART 2
 
China is no longer a low-cost ‘copycat’ economy. Indeed, it has bold plans to become a preeminent global force in genomic engineering to prevent and manage devastating and costly diseases. Here we briefly describe aspects of China’s multibillion-dollar, government-backed initiative to own and control significant capacity to transform genomic data into precision medicines. This is not only a ‘numbers’ game. China’s drive to achieve its life science ambitions is also advantaged by an approach to ‘individualism’ and privacy different from that of the US; and this could have far-reaching implications for future civilizations.

Uneven playing field
Genomic engineering and precision medicine have the potential to revolutionize how we prevent and treat intractable diseases. Whoever owns the intellectual property associated with genomic engineering, and first exploits it, will reap significant commercial benefits. However, genomic technologies are not like any others, because genetically modifying human genomes could trigger genetic changes across future generations. Misuse of such technologies could therefore result in serious harm to individuals and their families. On the other hand, over-regulation of genomic engineering could slow or even derail the prevention and treatment of devastating and costly diseases. Establishing a balance that supports measures to mitigate misuse of genomic technologies while allowing the advancement of precision medicine is critical, but it has proven difficult to establish internationally.

Chinese scientists have crossed an ethical line
Chinese culture interprets individualism and privacy differently from American culture, and therefore China responds differently to certain ethical standards than the US and some other Western nations. Indeed, national differences were ignited in 2015, when Chinese researchers published findings of the world’s first endeavor to modify the genomes of human embryos to confer genetic resistance to certain diseases. Because such modifications are heritable, critics argued that the Chinese scientists had crossed a significant ethical line, and that this was the start of a “slippery slope” that could eventually lead to the creation of a two-tiered society, with elite citizens genetically engineered to be smarter, healthier and longer-lived, and an underclass of biologically run-of-the-mill human beings.

International code of conduct called for but not adhered to
2 prominent scientific journals, Nature and Science, rejected the Chinese research papers reporting these world-first scientific breakthroughs on ethical grounds. Subsequently, Nature published a note calling for a global moratorium on the genetic modification of human embryos, suggesting that there are “grave concerns” about the ethics and safety of the technology. Some 40 countries have banned the genetic modification of human embryos. In 2016, a report from the UK’s Nuffield Council on Bioethics stressed the importance of an internationally agreed ethical code of conduct before genomic engineering develops further.
 
In 2017 an influential US science advisory group formed by the National Academy of Sciences and the National Academy of Medicine gave ‘lukewarm’ support to the modification of human embryos to prevent “serious diseases and disabilities”, and then only in cases where there are no “reasonable alternatives”. The French oppose genomic modification, the Dutch and the Swedes support it, and a recent Nature editorial suggested that the EU is “habitually paralyzed whenever genetic modification is discussed”. In the meantime, clinical studies involving genomic engineering are advancing apace in China.

With regard to genome testing, Western human rights activists have warned that China is targeting vulnerable groups and minorities to help build vast genomic databases without appropriate protections for individuals. These groups include migrant workers, political dissidents and ethnic or religious minorities such as the Muslim Uighurs in China’s far-western Xinjiang region. Xinjiang authorities are reported to have invested some US$10bn in advanced sequencing equipment to enhance the collection and indexing of these data.


Different national interpretations of ‘individualism’
‘Individualism’, which is at the core of ethical considerations of genomic engineering, is challenging to define because of its different cultural, political and social interpretations. For example, following the French Revolution, individualisme was used pejoratively in France to signify the sources of social dissolution and anarchy, and the elevation of individual interests above those of the collective. The contemporary Chinese interpretation of individualism is similar to the early 19th-century French interpretation: it does not stress a person’s uniqueness and separation from the State, but emphasizes an individual’s social contract and harmony with the State. By contrast, American individualism is perceived as an inalienable natural right of all citizens, independent of the State.

Further, American individuals are actively encouraged to challenge and influence the government and its regulatory bodies, whereas in China citizens are expected to support the State unquestioningly. China is a one-party state, where individuals generally accept that their government and its leaders represent their higher interests, and most citizens therefore accept that they are not expected to challenge or influence policies determined by the State and its leaders. This difference provides China with a significant competitive advantage in its endeavors to become a world leader in the life sciences.

 
Human capital

By 2025, some 2bn human genomes could have been sequenced. This presents not only ethical challenges but also significant human capital challenges. The development of personalized medicines is predicated upon the ability to aggregate and process vast amounts of individual genomic, physiological, health, environmental and lifestyle data. This requires next generation sequencing technologies, smart AI systems, and advanced data managers, all of which are in short supply globally. Thus, the cultivation and recruitment of appropriate human capital is central to competing in the rapidly evolving international genomic engineering marketplace. The fact that China has a more efficacious strategy to achieve this than the US and other Western democracies provides it with another significant competitive advantage.

STEM graduates
Since the turn of the century, China has been engaged in a silent revolution to substantially increase its pool of graduates in science, technology, engineering and mathematics (STEM), while the pool of such graduates in the US and other Western democracies has been shrinking. In 2016, China was building the equivalent of almost one university a week, which has resulted in a significant shift in the world’s population of STEM graduates. According to the World Economic Forum, in 2016 the numbers of people graduating in China and India were 4.7m and 2.6m respectively, while in the US only 568,000 graduated. In 2013, 40% of all Chinese graduates finished a degree in STEM, over twice the share in US universities. In 2016, India had the most graduates of any country worldwide with 78m, China followed closely with 77.7m, and the US came third with 67m graduates.

University education thriving in China and struggling in the West
In addition to China being ahead of both the US and Europe in producing STEM graduates, the gap between the top 2 countries and the US is widening. Projections suggest that by 2030 the number of 25-to-34-year-old graduates in China will increase by a further 300%, compared with an expected rise of around 30% in the US and Europe. In the US, students have been struggling to afford university fees, and most European countries have put a brake on expanding their universities, either by not making public investments or by restricting universities’ ability to raise money themselves.
 

The increasing impact of Chinese life sciences
China’s rapid expansion in STEM graduates suggests that the future might be different from the past. Today, China has more graduate researchers than any other country, and it is rapidly catching up with the US in the number of scientific papers published. The first published papers to describe genetic modifications of human embryos came from Chinese scientists.

Further, according to the World Intellectual Property Organization, domestic patent applications inside China soared from zero at the start of the 21st century to some 928,000 in 2014: 40% more than the US’s 579,000, and almost 3 times Japan’s 326,000.
 

China’s strategy to reverse the brain drain
Complementing China’s prioritization of domestic STEM education is its “Qianren Jihua” (Thousand Talents) strategy. Established in the wake of the 2008 global financial crisis to reverse China’s brain drain, it trawls the world to seek and attract highly skilled human capital to China by offering incentives. Qianren Jihua’s objective is to encourage STEM-qualified Chinese expatriates to return to China, to encourage those who already reside in China to stay, and together to help create an internationally competitive university sector by increasing the production of world-class research in support of China’s plans to dominate precision medicine and the life sciences.
 
Government commitment

In 2016, China announced plans for a multi-billion dollar project to enhance its competitiveness by becoming a global leader in molecular science and genomics. China is committed to supporting at least three principal institutions, including the Beijing Genomics Institute (BGI), to sequence the genomes of many millions.
 
In addition to investments at home, China is also investing in centers similar to BGI abroad. Over the past 2 years China has invested more than US$110bn in technology M&A deals, which it justifies by suggesting that emerging technologies are “the main battlefields of the economy”. Early in 2017 BGI announced the launch of a US Innovation Center, co-located in Seattle and San Jose. The Seattle organization is focused on precision medicine and includes collaborations with the University of Washington, the Allen Institute for Brain Science, and the Bill and Melinda Gates Foundation. The San Jose facility, where BGI already has a laboratory employing over 100 people, supports its ambitions to develop next-generation sequencing technologies, which until now have been dominated by the US sequencing company Illumina.


Changing structure of China’s economy
Some suggest that China’s rise on the world life sciences stage will be short-lived because the nation is in the midst of a challenging transition to a slower-growing, consumption-driven economy, and therefore will not be able to sustain such levels of investment, denting its ambition to become a global player in genomic science. An alternative argument suggests that slower growth forces China to act smarter, and that this is what drives its precision medicine ambitions.

Between 1985 and 2015, China’s annual GDP grew, on average, by 9.4%. Fuelling this growth were a steady supply of workers entering the labour force and massive government-led infrastructure investments. Now, because of China’s ageing population, its labour capacity has peaked and started to decline. Without labour force expansion, and with investment constrained by debt, China is obliged to rely more heavily on innovation to improve its productivity. This drives, rather than slows, China’s strategy to become a world leader in genomic technologies and personalized medicine.
 

China’s economic growth is slowing, but its production of scientific research is growing
Although China’s economy is slowing, it is still comparatively large. In 2000, China spent as much on R&D as France; now it invests more in genomics than the EU, when adjusted for the purchasing power of its currency. Today, China produces more research articles than any other nation apart from the US, and its authors feature on around 20% of the world’s most-cited peer-reviewed papers. Top Chinese scientific institutions are breaking into lists of the world’s best, and the nation has created some unparalleled research facilities. Even now, every 16 weeks China adds the equivalent of a Greece-sized economy, doubling the entire size of its economy every 7 years. Today, China has an economy similar in size to that of the US, and most projections suggest that, over the next 2 decades, China’s economy will dwarf that of the US.
 
Takeaways

China is applying to the life sciences the strategy it used successfully to own and control significant mineral and mining rights. Over the past 20 years China has actively pursued mining deals across the globe, and now controls significant mining rights and mineral assets in Africa and a few other regions, allowing it to affect the aggregate supply and world market prices of certain natural resources. It has now empowered and encouraged a number of state-owned and private companies to own and control genomic engineering and precision medicine in the same way. China’s single-minded determination to become a world leader in the life sciences, and its interpretation of individualism and privacy issues, could have far-reaching implications for the future of humanity.

 

 
  • In 2003 the US-led Human Genome Project completed the first sequencing of the human genome, and the US became the preeminent nation in genomics
  • This could change
  • World power and influence have moved East
  • China has invested heavily in genomic technologies and established itself as a significant competitive force in precision medicine
  • Ownership of intellectual property and knowhow is key to driving national wealth 
 

The global competition to translate genomic data into personal medical therapies

 

PART 1

Professor Dame Sally Davies, England’s Chief Medical Officer, is right. (Genomics) “has the potential to change medicine forever. . . . The age of precision medicine is now, and the NHS must act fast to keep its place at the forefront of global science.”
 
It is doubtful whether the UK will be able to maintain its place as a global frontrunner in genomics and personalized medicine. It is even doubtful whether the US, the first nation to sequence the genome and long preeminent in genomic research, will be able to maintain its position. China, with its well-funded strategy to become the world’s leader in genomics and targeted therapies, is likely to usurp both the UK and the US in the next decade.
 
This Commentary is in 2 parts. Part 1 provides a brief description of the global scientific competition between nation states to turn genomic data into medical benefits. China’s rise, which is described, could have significant implications for the future ownership of medical innovations, data protection, and bio-security. Part 2, which follows in 2 weeks, describes some of the ethical, privacy, human capital and economic challenges associated with transforming genomic data into effective personal therapies.
  
Turning genomic data into medical benefits
 
Turning genomic data into medical benefits is very demanding. It requires: a committed government willing and able to spend billions; a deep understanding of the relationship between genes and physiological traits; next generation sequencing technologies; artificial intelligence (AI) systems to identify patterns in petabytes (1 petabyte is equivalent to 1m gigabytes) of complex data; world-class bio-informaticians, who are in short supply; comprehensive and sophisticated bio-depositories; a living bio-bank; a secure data center; digitization, synthesis and editing platforms; and petabytes of genomic, clinical, and personal data. Before describing how the UK, US and China are endeavoring to transform genomic data into personal medicine, let us refresh our understanding of genomics.

  
Genomics, the Human Genomic Project and epigenetics
 
It is widely understood that your genes are responsible for passing specific features or diseases from one generation to the next via DNA, and genetics is the study of the way this is done. However, it is less widely known that your genes are influenced by environmental and other factors. Scientists have demonstrated that inherited genes are not static: lifestyles and environmental factors can precipitate a chemical reaction within your body that permanently alters the way your genes react. This environmentally triggered gene expression, or epigenetic imprint, can be bad, such as a disease, or good, such as a tolerant predisposition. Epigenetics is still developing as an area of research, but it has demonstrated that preventing and managing disease is as much to do with lifestyles and the environment as it is to do with inherited genes and drugs. If environmental exposure can trigger a chemical change in your genes that results in the onset of disease, then scientists might be able to pharmacologically manipulate the same mechanisms in order to reverse the disease.
 
DNA is constantly subject to mutations, which can lead to missing or malformed proteins, and that can lead to disease. You all start your lives with some mutations, which are inherited from your parents, and are called germ-line mutations. However, you can also acquire mutations during your lifetime. Some happen during cell division, when DNA gets duplicated, other mutations are caused when environmental factors including, UV radiation, chemicals, and viruses damage DNA.

You have a complete set of genes in almost every healthy cell in your body. One set of all these genes (plus the DNA between them) is called a genome. The genome is the collection of some 20,000 genes, comprising 3.2bn letters of DNA, which make up an individual. We all share about 99.8% of the genome. The secrets of your individuality, and also of the diseases you are prone to, lie in the other 0.2%, roughly 6m letters of DNA. The genome is known as ‘the blueprint of life’, and genomics is the study of the whole genome and how it works. Whole genome sequencing (WGS) is the process of determining the complete DNA sequence of an organism’s genome at a point in time.
 
‘The Human Genome Project’ officially began in 1990 as an international research effort to determine a complete and accurate sequence of the 3bn DNA base pairs, which make up the human genome, and to find all of the estimated 20 to 25,000 human genes. The project was completed in April 2003. This first sequencing of the human genome took 13 years and cost some US$3bn. Today, it takes a couple of days to sequence a genome, and costs range from US$260 for targeted sequencing to some US$4,000 for WGS. Despite the rapidly improving capacity to read, sequence and edit the information contained in the human genome, we still do not understand most of the genome’s functions and how they impact our physiology and health.

 
Roger Kornberg explains the importance of genomics
 
Roger Kornberg, Professor of Structural Biology at Stanford University, and 2006 Nobel Laureate for Chemistry, explains the significance of sequencing the human genome, “The determination of the human genome sequence and the associated activity called genomics; and the purposes for which they may be put for medical uses, takes several forms. The knowledge of the sequence enables us to identify every component of the body responsible for all of the processes of life. In particular, to identify any component that is either defective or whose activity we may adjust to address a problem or a condition. So the human genome sequence makes available to us the entire array of potential targets for drug development. . . . . The second way in which the sequence and the associated science of genomics play an important role is in regard to individual variations. Not every human genome sequence is the same. There is a wide variation, which in the first instance is manifest in our different appearances and capabilities. But it goes far deeper because it is also reflected in our different responses to invasion by microorganisms, to the development of cancer and to our susceptibility to disease in general. It will ultimately be possible, by analyzing individual genome sequences to construct a profile of such susceptibilities for every individual, a profile of the response to pharmaceuticals for every individual, and thus to tailor medicines to the needs of individuals.” See video below.
 
 
UK’s endeavors to transform genomic data into personal therapies

In 2013 the UK government set up Genomics England, a company charged with sequencing 100,000 whole genomes by 2017. In 2014, the government announced a £78m deal with Illumina, a US sequencing company, to provide Genomics England with next generation whole genome sequencing services. At the same time the Wellcome Trust invested £27m in a state-of-the-art sequencing hub to enable Genomics England to become part of the Wellcome Trust’s Genome Campus in Hinxton, near Cambridge, England. In 2015, the UK government pledged £215m to Genomics England.
 
DNA testing and cancer
“DNA sequencing is simply the process of reading the code that is in any organism . . . It’s essentially a technology that allows us to extract DNA from a cell, or many cells, pass it through a sophisticated machine and read out the sequence for that organism or individual,” says David Bowtell, Professor and Head of the Cancer Genomics and Genetics Program at the Peter MacCallum Cancer Centre, Melbourne, Australia; see video below. “DNA testing has become increasingly widespread because advances in technology have made the opportunity to sequence the DNA of individuals affordable and rapid . . . DNA testing in the context of cancer can be useful to identify a genetic risk of cancer, and to help clinicians make therapeutic decisions for someone who has cancer,” says Bowtell; see video below.
 

What is DNA sequencing?


What are the advantages of a person having a DNA test?

Need for National Genome Board
Despite significant investments by the UK government, Professor Davies, England’s Chief Medical Officer, complained in her 2017 Annual Report that genomic testing in the UK was like a “cottage industry”, and recommended setting up a new National Genome Board tasked with making whole genome sequencing (WGS) standard practice in the NHS across cancer care, as well as in some other areas of medicine, within the next 5 years.
 
USA’s endeavors to transform genomic data into personal therapies

In early 2015 President Obama announced plans to launch a $215m public-private precision medicine initiative, which involved the health records and DNA of 1m people, to leverage advances in genomics with the intention of accelerating biomedical discoveries in the hope of yielding more personalized medical treatments for patients. A White House spokesperson described this as “a game changer that holds the potential to revolutionize how we approach health in the US and around the world.
 

Data management challenges
The American plan did not seek to create a single bio-bank, but instead chose a distributive approach that combines data from over 200 large on-going health studies, which together involve some 2m people. The ability of computer systems or software to exchange and make use of information stored in such diverse medical records and numerous gene databases presents a significant challenge for the US plan. According to Bowtell, “Data sharing is widespread in an ethically appropriate way between research institutions and clinical groups. The main obstacles to more effective sharing of information are the very substantial informatics challenges. Often health systems have their own particular ways of coding information, which are not cross-compatible between different jurisdictions. Hospitals are limited in their ability to capture information because it takes time and effort. Often information that could be useful to researchers, and ultimately to patients, is lost, just because the data are not being systematically collected.” See video below.
 
 
 
China’s endeavors to transform genomic data into personal therapies

In 2016, the Chinese government launched a 15-year, US$9bn endeavor aimed at turning China into a global scientific leader by harnessing computing and AI technologies for interpreting genomic and health data. This positions China to eclipse similar UK and US initiatives.
 

Virtuous circle
Transforming genomic data into medical therapies is more than a numbers race. Chinese scientists are gaining access to ever-growing amounts of human genomic data, and developing the machine-learning capabilities required to transform these data into sophisticated diagnostics and therapeutics, which are expected to drive the economy of the future. The more genomic data a nation has, the better its potential clinical outcomes. The better a nation’s clinical outcomes, the more data it can collect. The more data it collects, the more talent it attracts. The more talent it attracts, the better its clinical outcomes.
 

The Beijing Genomics Institute
In 2010 China became the global leader in DNA sequencing because of one company: the Beijing Genomics Institute (BGI), which was created in 1999 as a non-governmental independent research institute, then affiliated to the Chinese Academy of Sciences, in order to participate in the Human Genome Project as China's representative. In 2010, BGI received US$1.5bn from the China Development Bank, and established branches in the US and Europe. In 2011 BGI employed 4,000 scientists and technicians. While BGI has had a chequered history, today it is one of the world’s most comprehensive and sophisticated bio-repositories.

The China National GeneBank
In 2016 BGI-Shenzhen established the China National GeneBank (CNGB) on a 47,500sq.m site. This is the first national gene bank to integrate a large-scale bio-repository and a genomic database, with a goal of enabling breakthroughs in human health research. The gene bank is supported by BGI’s high-throughput sequencing and bio-informatics capacity, and will not only provide a repository for biological collections but, more importantly, is expected to develop a novel platform to further understand the genomic mechanisms of life. During the first phase of its development the CNGB will have stored more than 10m bio-samples, and will have storage capacity for 20 petabytes (20m gigabytes) of data, which is expected to increase to 500 petabytes in the second phase. The CNGB represents a new generation of genetic resource repository, bioinformatics database, knowledge database and tool library, “to systematically store, read, understand, write, and apply genetic data,” says Mei Yonghong, its Director.

Whole-genome sequencing for $100
The CNGB could also help to bring down the cost of genomic sequencing. It is currently possible to sequence an individual's entire genome for under US$1,000, but the CNGB aims to reduce the price to US$152. Meanwhile, researchers at Complete Genomics, a US company acquired by BGI in 2013, which has developed and commercialized a DNA sequencing platform for human genome sequencing and analysis, are pushing the technology further to enable whole-genome sequencing for US$100 per sample. China's share of the world's sequencing capacity is estimated to be between 20% and 30%, which is lower than when BGI was in its heyday, but is expected to increase fast. “Sequencing capacity is rising rapidly everywhere, but it's rising more rapidly in China than anywhere else,” says Richard Daly, CEO of DNAnexus, a US company that supplies cloud platforms for large-scale genomics data.

The intersection of genomics and AI
Making sense of 1m human genomes is a major challenge, says Professor Jian Wang, former BGI President and co-founder, who has started another company called iCarbonX. Also based in Shenzhen, the company sits at the intersection of genomics and AI. iCarbonX has raised more than US$600m, and plans to collect genomic data from more than 1m people and complement these data with other biological information, including changes in levels of proteins and metabolites. This is expected to allow iCarbonX to develop a new digital ecosystem, comprising billions of connections between huge amounts of individuals’ biological, medical, behavioural and psychological data, in order to understand how their genes interact and mutate, how diseases and aging manifest themselves in cells over time, how everyday lifestyle choices affect morbidity, and how these personal susceptibilities play a role in a wide range of treatments.

iCarbonX is expected to gather data from brain imaging, biosensors, and smart toilets, which will allow real-time monitoring of urine and faeces. The company’s goal is to study the evolution of our genome as we age, and to design personalized health predictions, such as susceptibilities to diseases, and tailored treatment options. iCarbonX’s endeavours are expected to dwarf efforts by US Internet giants at the intersection of genomics and AI.

 
Ethical challenges

China’s single-minded objective to turn its know-how and experience of genome sequencing into personal targeted medical therapies has made it a significant global competitive force in life sciences. However, precision medicine’s potential to revolutionize how we treat diseases confers on it moral and ethical obligations. For personal therapies to be effective, it is important that genomic data are complemented with clinical and other personal data. This combination of data is as personal as personal information gets, and there could be harm to the tested individual and family if genomic information from testing were misused. Reconciling therapy and privacy is important, because privacy issues concerning patients' genomic data can slow or derail the progression of novel personal therapies to prevent and manage intractable diseases. The stakes are high in terms of biosecurity, as genomic research is both therapeutic and a strategic element of national security. While it is crucial to leverage genomic data for future health, economic and biodefense capital, these data will also have to be appropriately managed and protected. Part 2 of this Commentary dives into these challenges a little deeper, and describes some of China’s competitive advantages in the race to become the world’s preeminent nation in genomics and precision medicine.
 
Takeaways

Despite the endeavours of the UK and US to remain at the forefront of the international competition to transform genomic data into personalized medical therapies for some of the world’s most common and intractable diseases, it seems reasonable to assume that China is on the cusp of becoming the dominant nation in novel personalized treatments. Nevertheless, China’s determination to assume the global frontrunner position in genomic science might have blunted its concern for some of the ethical issues that surround the life sciences. To the extent that this is the case, the future of humanity might well differ significantly from the generally accepted western vision.
  • A number of new studies on ovarian cancer show “promising” results for patients who develop chemo-resistance
  • A Dutch study uses conventional chemotherapeutics more intensively
  • Another study uses a new class of drug discovered by the UK’s Institute of Cancer Research
  • Genetic testing is playing an increasing role in the reduction of chemo-resistance
  • Since 2014 the Royal Marsden NHS Trust Hospital in London has employed genetic profiling of ovarian cancer patients
  • The UK’s Chief Medical Officer suggests that whole genome sequencing should become standard practice on the NHS across cancer care
  • A new class of chemotherapeutic agent is directed at targeting cancers with defective DNA-damage repair
  • Improvements in cancer care have been both scientific and organizational
  • Utilizing and sequencing the treatment options for ovarian cancer may have a significant impact on the overall survival rates of patients
  • Multidisciplinary teams are transforming ovarian cancer care 
 
Improving ovarian cancer treatment 

Part II

Part 1 described ovarian cancer, the difficulties of diagnosing the disease early, and the challenges of developing effective screening mechanisms for it in pre-symptomatic women. Here, in Part 2, we report new studies that hold out the prospect of improved treatment options for women living with ovarian cancer. Both Commentaries draw on some of the world’s most eminent ovarian cancer clinicians and scientists.
 
1

Established chemotherapy agents combined and used intensively

The first study we describe is Dutch, published in 2017 in the British Journal of Cancer. It reports findings of a pioneering type of intensive chemotherapy, which was effective in 80% of patients with advanced ovarian cancer whose first-line chemotherapy had failed. Currently, such patients have few options because more than 50% do not respond to follow-up chemotherapy.
 
Intensive combinations
The study, led by Dr. Ronald de Wit of the Rotterdam Cancer Institute, involved 98 patients who first responded to chemotherapy only later to relapse. Patients in the study were divided into three groups according to the severity of their condition, and treated with a combination of two well-established chemotherapy agents, cisplatin and etoposide, but the new treatment used the drugs much more intensively than usual.
 
Usually, chemotherapy is delivered as a course of 21-day sessions (cycles) over several months. Between cycles patients are given time to recover from the toxic side effects, including neurotoxicity, nephrotoxicity, ototoxicity, and chemotherapy-induced nausea and vomiting (CINV). In de Wit’s study the combined chemotherapy drugs were given intensively, on a weekly basis, along with drugs to prevent adverse side effects.
 
Findings
Among the group of women in de Wit’s study who were most seriously ill, 46% responded to the new treatment, compared with less than 15% for current therapies. Response rates to the new treatment in the two less seriously ill groups were 92% and 91%, compared with 50% and 20 to 30% with standard therapies. Overall, 80% of the women's tumours shrank, and 43% showed a complete response, with all signs of their cancers disappearing.
 
Immediate benefit
"We were delighted by the success of the study. The new drug combination was highly effective at keeping women alive for longer, giving real hope to those who would otherwise have had very little . . . . We were worried the women would be too ill to cope with the treatment, but in fact, they suffered relatively few side effects. And since these drugs are readily available, there's no reason why women shouldn't start to benefit from them right away," says de Wit.
 
2
 
ONX-0801 study

The second study we report was presented at the 2017 American Society of Clinical Oncology (ASCO) meeting in Chicago. It describes findings of an experimental new treatment that was found to dramatically shrink advanced ovarian cancer tumors, which researchers suggest is “much more than anything that has been achieved in the last 10 years”.
 
“Very promising” findings
Dr. Udai Banerji, the leader of the study, is the Deputy Director of Drug Development at the UK’s Institute of Cancer Research (ICR). Banerji and his team were testing a drug, known as ONX-0801, for safety, but found that tumors in half of the 15 women studied shrank during the trial, a response Banerji called “highly unusual” and “very promising”. The drug, which has “a completely new mechanism of action”, could add “upward of six months to the lives of patients with minimal side effects”. If further clinical studies prove the drug’s effectiveness, it could potentially be used in early-stage ovarian cancer where “the impact on survival may be better,” says Banerji.
 
New class of drug
ONX-0801 is the first in a new class of drug discovered by the ICR, and tested with the Royal Marsden NHS Foundation Trust. It attacks ovarian cancer by mimicking folic acid in order to enter the cancer cells. The drug then kills these cells by blocking a molecule called thymidylate synthase. ONX-0801 could be effective in treating the large group of chemo-resistant sufferers for whom there are currently limited options. Additionally, because the new therapy targets cancer cells and does not affect surrounding healthy cells, there are fewer side effects. Further, experts have developed tests to detect the cells that respond positively to this new treatment, which means oncologists can identify those women who are likely to benefit from the therapy the most.
 
Cautious note
Although the study is said to be “very promising”, Michel Coleman, Professor of Epidemiology at the London School of Hygiene & Tropical Medicine, suggests caution in interpreting its findings as it is such a small study and while, “shrinkage of tumors is important . . . it is not the same as producing the hoped-for extension of survival for women with ovarian cancer.”
 
3
 
Genetic testing

Resistance to chemotherapy can be reduced by DNA testing, which provides increased knowledge of the molecular mechanisms of ovarian cancer pathogenesis and facilitates personalized therapies that target certain subtypes of the disease. “Some people choose to have DNA testing because either they have developed cancer or family members have,” says David Bowtell, Professor and Head of the Cancer Genomics and Genetics Program at Peter MacCallum Cancer Centre, Melbourne, Australia. “In the context of cancer, personalized medicine is the concept that we look into the cancer cell and understand for that person what specific genetic changes have occurred in their cancer. Based on those specific changes, for that person we then decide on a type of therapy, which is most appropriate for the genetic changes that have occurred in that cancer . . . Typically this involves taking a sample of the cancer, running it through DNA sequencing machines, and using bioinformatics to interpret the information. Then the results, which include gene mutations, need to be interpreted by a multidisciplinary team, in order to decide the best possible treatment options for that particular patient,” says Bowtell: see videos below.
 
How do genetic mutations translate into personalised medicine?


How is personalised medicine implemented?
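To make the last step Bowtell describes more tangible, here is a minimal sketch of matching a tumor’s mutated genes to candidate therapies. The lookup table is a toy stand-in (the BRCA-to-PARP-inhibitor link is discussed in section 4 below); real decisions rest on large curated knowledge bases and, as Bowtell stresses, a multidisciplinary team.

```python
# Toy sketch: map genes found mutated in a tumor to therapies with a
# known actionable link. The table is illustrative only; curated
# knowledge bases and multidisciplinary review drive real decisions.

ACTIONABLE = {
    "BRCA1": ["PARP inhibitor"],
    "BRCA2": ["PARP inhibitor"],
}

def candidate_therapies(mutated_genes):
    """Return therapy suggestions for genes with an actionable entry."""
    return {g: ACTIONABLE[g] for g in mutated_genes if g in ACTIONABLE}

# A hypothetical mutation profile produced by sequencing + bioinformatics:
print(candidate_therapies(["BRCA1", "TP53"]))
# {'BRCA1': ['PARP inhibitor']} -- TP53 has no entry in this toy table
```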
 
Mainstreaming cancer genetics
Since 2014 the Royal Marsden NHS Trust Hospital in London has employed genetic profiling of ovarian cancer patients, and has used laboratories with enhanced genetic testing capabilities to streamline and speed up processing time, lower costs, and help meet the large and growing demand for rapid, accurate and affordable genetic testing. The program, called Mainstreaming Cancer Genetics, helps women with cancer make critical decisions about their treatment options. Currently, fewer than 33% of patients are tested, but this study spearheaded the beginning of a significant change. In her 2017 Annual Report, Professor Dame Sally Davies, England’s Chief Medical Officer, suggested that within the next 5 years all cancer patients should be routinely offered DNA tests on the NHS to help them select the best personalized treatments.
 

Bringing genetic testing to patients
According to Nazneen Rahman, Professor and Head of the Division of Genetics and Epidemiology at the ICR, and Head of the Cancer Genetics Unit at the Royal Marsden Hospital, London, “There were two main problems with the traditional system for gene testing. Firstly, gene testing was slow and expensive, and secondly the process for accessing gene testing was slow and complex . . . . We used new DNA sequencing technology to make a fast, accurate, affordable cancer gene test, which is now used across the UK. We then simplified test eligibility and brought testing to patients in the cancer clinic, rather than making them have another appointment, often in another hospital.” 
 

More people benefiting from affordable rapid advanced genetic testing
Treatment strategies that improve the selectivity of current chemotherapy have the potential to make a dramatic impact on ovarian cancer patient outcomes. The Marsden is now offering genetic tests to three times more cancer patients a year than before the program started. The new pathway is faster, with results arriving within 4 weeks, as opposed to the previous 20-week waiting period. According to Rahman, “Many other centres across the country and internationally are adopting our mainstream gene testing approach. This will help many women with cancer and will prevent cancers in their relatives.” If the UK government acts on the recommendations of Davies, there could be a national center for genetic testing within the next 5 years.
 
4

PARP inhibitors and personalized therapy
 
Since 2 seminal 2005 publications in Nature (Bryant et al., 2005; Farmer et al., 2005), which reported the extremely high sensitivity of BRCA-mutant cell lines to inhibition of the enzyme poly (ADP-ribose) polymerase (PARP), there has been a scientific race to exploit a new class of cancer drug called PARP inhibitors. The family of PARP inhibitors represents a widely researched and promising alternative for the targeted therapy of ovarian malignancies. Over the past few years, PARP inhibitors have successfully moved into clinical practice, and are now used to help improve progression-free survival in women with recurrent platinum-sensitive ovarian cancer.

 
Recent PARP approvals
In 2014, olaparib was the first PARP inhibitor to obtain EU approval as a treatment for ovarian cancer patients who had become resistant to platinum-based chemotherapy. In 2017, the FDA granted the drug ‘priority review’ as a maintenance therapy in relapsed patients with platinum-sensitive ovarian cancer while confirmatory studies are completed. In December 2016, the FDA granted ‘accelerated approval’ for rucaparib, another PARP inhibitor, for the treatment of women with advanced ovarian cancers who have been treated with two or more chemotherapies and whose tumors have specific BRCA gene mutations.
 
Early in 2017, niraparib became the first PARP inhibitor to be approved by the FDA for the maintenance treatment of adult patients with recurrent gynaecological cancers who are in a complete or partial response to platinum-based chemotherapy. The approval was based upon data from an international, randomized, prospectively designed phase III clinical study, which enrolled 553 patients and showed a clinically meaningful increase in progression-free survival (PFS) in women with recurrent ovarian cancer, regardless of BRCA mutation or biomarker status. In conjunction with the accelerated approval of rucaparib, the FDA also approved a BRCA diagnostic test, which identifies patients with advanced ovarian cancer eligible for treatment with rucaparib.
 

New class of chemotherapies
PARP inhibitors may represent a significant new class of chemotherapeutic agents directed at targeting cancers with defective DNA-damage repair. Currently, these drugs have a palliative indication for a relatively small cohort of patients. In order to widen the prospective patient population that would benefit from PARP inhibitors, predictive biomarkers based on a clearer understanding of the mechanism of action, and a better understanding of their toxicity profile, will be required. Once this is achieved, PARP inhibitors could be employed in the curative, rather than the palliative, setting.
 
5
 
The future of cancer care and multidisciplinary teams
 
According to Hani Gabra, Professor of Medical Oncology at Imperial College, London, and Head of AstraZeneca’s Oncology Discovery Unit, we now have “many options” for treating ovarian cancer. However, “how we utilize and sequence these options may have a significant impact on the overall survival of a patient. Better understanding of the disease through science is constantly turning up new options. For the first time in the last 5 years we are developing options in real time for patients. Patients are almost able to benefit from these options as they are relapsing from their disease. Keeping patients alive for longer allows them to access new treatments . . . It’s truly remarkable to see this in real time as a doctor,” says Gabra: see video.
 

A significant number of mostly private patients diagnosed with ovarian cancer draw comfort from the belief that they “have the best oncologist”. This view fails to grasp the challenges facing individual clinicians acting on their own to treat a devilishly complex disease such as ovarian cancer. “The main improvements in cancer care have been organizational and scientific,” says Gabra. “It is not enough to create new science and new treatments. It is also important to rigorously implement these. The most effective way to do this is via a ‘tumor board’ or a ‘multidisciplinary clinic or team’, where various specialists such as surgeons, radiotherapists, medical oncologists, pathologists and clinical nurse specialists come together and discuss each individual patient. Such multidisciplinary discussion results in the best utilization of currently available treatment options in the right sequence. It’s difficult for a doctor acting on his or her own and making isolated decisions to do this . . . Multidisciplinary decision-making has transformed cancer care,” says Gabra: see video.
 
 
Takeaways

This Commentary provides a flavor of some of the recent advances in ovarian cancer research and care, and suggests that treatment options have improved in the 4 years since Maurice Saatchi described ovarian cancer care as “degrading, medieval and ineffective”, leading “only to death”. It is worth stressing that improvements in care are both organizational and scientific, and that multidisciplinary teams can transform care and prolong life.

  • Ovarian cancer is a deadly disease that is challenging to diagnose and manage
  • Although it only accounts for 3% of cancers in women, it is the 5th leading cause of cancer death among women
  • If diagnosed and treated early before it spreads the 5-year survival rate is 92%
  • But only 15% of women with ovarian cancer are diagnosed early
  • The disease is hard to diagnose because it is rare, the symptoms are relatively benign, and there is no effective screening
  • Ovarian cancer is not one disease, but a collection of subtypes each demanding specific treatment pathways
  • Gold standard treatment is surgery followed by chemotherapy
  • A large proportion of patients develop resistance to chemotherapy
 
Improving ovarian cancer treatment

Part I
 
Are things beginning to improve for people living with ovarian cancer? When the British advertising magnate Lord Maurice Saatchi’s wife died of ovarian cancer in 2012, he described her treatment as “degrading, medieval and ineffective”, leading “only to death”. Ovarian cancer patients have long had limited treatment options, which have not changed much in the past two decades, but recently things have begun to change.

 
In this Commentary
 
This is the first of a 2-part Commentary on ovarian cancer, which briefly describes the condition, explains the difficulties of diagnosing it early, and discusses some of the challenges of developing effective screening mechanisms for the cancer in pre-symptomatic women. Part 2, which will follow separately next week, reports new studies that hold out the prospect of improved treatment options for women living with ovarian cancer. It also suggests that improvements in ovarian cancer care are both organizational and scientific. Experts believe that they now have a number of treatment options available to them, and that utilising and sequencing these appropriately can have a significant impact on the overall survival rates of patients. Multidisciplinary teams, which are not universally available to all ovarian cancer patients, bring together all the specialisms involved in the therapeutic pathway to consider and suggest optimal treatment steps for individual patients, and make a significant contribution to improved ovarian cancer care. Both Commentaries draw on some of the world’s most eminent ovarian cancer clinicians and scientists.
 
Ovarian cancer: a complex and deadly disease
 
The ovaries are a pair of small organs located low in the abdomen that are connected to the womb and store a woman’s supply of eggs. Ovarian cancer is driven by multicellular pathways, and is better understood as a collection of subtypes with differing origins and clinical behaviors rather than as a single disease. The tumors often have heterogeneous cell populations, which form unique microcellular environments.

The prevalence of ovarian cancer among gynecological malignancies is rising, and it is one of the most deadly and hard-to-treat malignancies. While the disease only accounts for about 3% of cancers in women, it is one of the most common types of cancer in women, the 5th leading cause of cancer-related death among women, and the deadliest of gynecologic cancers. The risk of ovarian cancer increases with age. It is rare in women younger than 40; most ovarian cancers develop after menopause, and 50% of all ovarian cancers are found in women 63 or older. According to the American Cancer Society the 5-year survival rate for all ovarian cancers is 45%. Most women are diagnosed with late-stage ovarian disease, and the 5-year survival rates for these patients are roughly 30%.

Age-adjusted survival rates of ovarian cancer are improving in most developed countries. For instance, between 1970 and 2010, the 10-year survival rates for ovarian cancer in England increased by 16%, and the 5-year survival rates almost doubled. This is because of favorable trends in the use of oral contraceptives, which were introduced early in developed countries. Declines in menopausal hormone use may also have had a favorable effect in older women, as have improved diagnosis, management and therapies. According to Public Health England, over the past 20 years the incidence of ovarian cancer in England has remained fairly stable, although it has decreased slightly in the last few years. Between 2008 and 2010 in England, 36% of some 14,000 women diagnosed with ovarian cancer died in the first year, and more than 1,600 died in the first month. There were 7,378 new cases of ovarian cancer in the UK in 2014, and more than 4,000 women died from the disease.
 
Benign symptoms difficult to diagnose

If ovarian cancer is diagnosed and treated early, before it spreads from the ovaries to the abdomen, the 5-year relative survival rate is 92%. However, only 15% of all ovarian cancers are found at this early stage. The disease is hard to diagnose because it is rare, the symptoms are relatively benign, and there is no effective screening. As a result, the illness tends not to be detected until the later stages in around 60% of women, when the prognosis is poor. In about 20% of cases the disease is not diagnosed until it is incurable. Feeling bloated most days for three weeks or more is a significant sign of ovarian cancer. Other symptoms include: feeling full quickly, loss of appetite, pelvic or stomach pain, needing to urinate more frequently than normal, changes in bowel habit, feeling very tired, and unexplained weight loss.
 
“Tumors go from the earliest stage 1 directly to stage 3”
In the video below Hani Gabra, Professor of Medical Oncology at Imperial College, London; and Head of AstraZeneca’s Oncology Discovery Unit says, “Ovarian cancer is often diagnosed late because in many cases the disease disseminates into the peritoneal cavity almost simultaneously with the primary declaring itself. Unlike other cancers, the notion that ovarian cancer progresses from stage 1 to stage 2, to stage 3 is possibly mythological. The reality is, these cancer cells often commence in the fallopian tube with a very small primary tumor, which disseminates directly into the peritoneal cavity. In other words, the tumors go from the earliest of stage 1 directly to stage 3."
 
 
Ovarian cancer screening and CA-125

For years scientists have been searching for an effective screening test for ovarian cancer in pre-symptomatic women. The 2 most common tests are transvaginal ultrasound (TVUS) and the CA-125 blood test. The former uses sound waves to examine the uterus, fallopian tubes, and ovaries by putting an ultrasound wand into the vagina. It can help find a tumor in the ovary, but cannot tell whether the tumor is cancerous or benign. Most tumors identified by TVUS are not cancerous. So far, the most promising screening method is CA-125, which measures a protein antigen produced by the tumor.
 
CA-125 studies
To date, 2 large ovarian cancer screening studies have been completed: one in the US, and another in the UK. Both looked at using the CA-125 blood test along with TVUS to detect ovarian cancer. In these studies, more cancers were found in the women who were screened, and some were at an early stage. But the outcomes of the women who were screened were no better than those of the women who were not screened: the screened women did not live longer and were no less likely to die from ovarian cancer.

Another study, published in 2017 in the Journal of Clinical Oncology, screened 4,346 women over 3 years at 42 centers across the UK, undertook follow-up studies 5 years later, and came to similar conclusions to the 2 previous studies. Further, “there are a number of non-ovarian diseases, which can cause elevated CA-125s. Breast cancer, endometriosis, and irritation of the peritoneal cavity can all cause elevated CA-125,” says Michael Birrer, Director of Medical Gynecologic Oncology at the Massachusetts General Hospital and Professor of Medicine at Harvard University.


Controversial findings
Findings from screening tests using CA-125 can give false positives for ovarian cancer, and this puts pressure on patients to have further, often unnecessary, interventions, which sometimes include surgery. Also, the limitations of the CA-125 test mean that many women with early-stage ovarian cancer will receive a false negative from testing, and not get further treatment for their condition. The potential role of CA-125 in the early detection of ovarian cancer is therefore controversial, and it has not been adopted for widespread screening in asymptomatic women.
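The false-positive problem is largely a consequence of the disease’s rarity, as a short Bayes’ rule calculation shows. The prevalence, sensitivity and specificity figures below are illustrative assumptions for the arithmetic, not published performance data for CA-125.

```python
# Why screening for a rare disease yields mostly false positives:
# positive predictive value (PPV) via Bayes' rule. All numbers are
# illustrative assumptions, not measured CA-125 performance.

def ppv(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 2,500 asymptomatic women screened has ovarian cancer:
print(f"PPV = {ppv(1 / 2500, 0.80, 0.95):.1%}")
# PPV = 0.6% -- more than 99% of positive results would be false
```

Even with an assumed 95% specificity, almost all positive results come from the much larger cancer-free group, which is why an otherwise reasonable-looking test can drive many unnecessary interventions.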
 
In the video below Birrer explains: “Pre-operatively and during therapy physicians will usually check CA-125 as a measure of the effectiveness of the therapy. At the completion of therapy one would anticipate that the CA-125 would be normal. After that, it is somewhat controversial as to whether follow-up with CA-125 to test for recurring disease is clinically relevant.” Since the discovery of CA-125 in 1981, there has been an intense research focus on novel cancer biomarkers, and significant scientific advances in genomics, proteomics, and epigenomics have been extensively used in scientific discovery, but as yet no major new cancer biomarkers have been introduced to practicing oncologists.

 
Limited treatment options

As most ovarian cancer patients are diagnosed late, when the disease has already spread, treatment options are limited. The first-line treatment is surgery called debulking (also known as cytoreduction or cytoreductive surgery), which is the removal of as much of the volume (bulk) of a tumor as possible.
 
Be prepared for extensive surgery
Whether a patient is a candidate for surgery depends on a number of factors, including the type, size, location, grade and stage of the tumor, pre-existing medical conditions, and, in the case of a recurrence, when the last cancer treatment was performed, as well as general health factors such as age, physical fitness and other medical comorbidities. People diagnosed with ovarian cancer “need to be prepared to have extensive surgery because the real extent of the tumor dissemination cannot be detected by conventional imaging pre-operatively,” says Professor Christina Fotopoulou, consultant gynaecological oncologist at Queen Charlotte's & Chelsea Hospital, London: see video below.
 
 
Platinum resistance

Surgery is usually followed by chemotherapy. There are more than 100 chemotherapy agents used to treat cancer, either alone or in combination. Chemotherapy drugs target cells at different phases of the process of forming new cells, called the cell cycle. Understanding how these drugs work helps oncologists predict which drugs are likely to work well together. Clinicians can also plan how often doses of each drug should be given based on the timing of the cell phases. Chemotherapy drugs can be grouped by their chemical composition, their relationship with other drugs, their utility in treating specific forms of cancer, and their side effects.
 
“You can reduce chemotherapy resistance by using a combination of drugs that target different processes in the cancer, so that the probability that the cancer will simultaneously become resistant to both drugs is much lower than if you use one drug at a time,” says David Bowtell, Professor and Head of the Cancer Genomics and Genetics Program at Peter MacCallum Cancer Centre, Melbourne, Australia: see video below.
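Bowtell’s argument is probabilistic: if resistance to each drug arises independently, the chance of simultaneous resistance is the product of the per-drug chances. The numbers in the sketch below are illustrative assumptions only.

```python
# Toy calculation behind combination chemotherapy: under independence,
# the probability of simultaneous resistance to two drugs is the
# product of the per-drug probabilities. Numbers are illustrative.

p_resist_a = 0.10  # assumed chance of resistance to drug A
p_resist_b = 0.10  # assumed chance of resistance to drug B

print(f"Resistant to A alone: {p_resist_a:.0%}")                          # 10%
print(f"Resistant to both (independent): {p_resist_a * p_resist_b:.0%}")  # 1%
```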
 
 
Improving the chemotherapy agent cisplatin
The standard chemotherapy treatment for ovarian cancer is a combination of a platinum compound, such as cisplatin or carboplatin, and a taxane, a class of drug originally identified from plants. Since cisplatin’s discovery in 1965 and its FDA approval in 1978, it has been used continuously in treatments for several types of cancer, and is best known as a cure for testicular cancer. Scientists have searched for ways to improve the anti-tumor efficacy of platinum-based drugs, reducing their toxicity and strengthening them against resistance by expanding the class to include several new analogues of cisplatin, and putting these through clinical studies to broaden the range of cancers against which they can be safely used.
 
Slow progress transitioning research into clinical practice
Despite these endeavors, platinum resistance remains a significant clinical challenge. Between 55% and 75% of women with ovarian cancer develop resistance to platinum-based chemotherapy treatments. Significant research efforts have been dedicated to understanding this, but there has been relatively slow progress in transitioning the research into effective clinical applications. According to Birrer, “the mechanism of platinum resistance from a molecular standpoint has not been well defined. It is likely to be heterogeneous, which means that each patient’s tumor may be slightly different. The hope is for targeted therapies and personalised medicine to have a chance of overcoming this, in that we could characterize the mechanism of the platinum resistance and apply and target therapy.”
 
2 theories of platinum resistance
In the video below, Birrer posits 2 theories to explain platinum resistance. “One suggests that under the influence of platinum the tumor changes and becomes resistant. Another suggests that there are 2 groups of cells to begin with. The vast majority of the tumor is sensitive, but there are small clusters of resistant cells. Once you kill the sensitive cells you have only the resistant cells left. Although these 2 theories have been around for about 25 years, there are no definitive data to suggest which theory is right. I have a personal scientific bias to think that the resistant cells are present at the time that we start the therapy. Being able to identify and characterize these cells upfront would be a radical breakthrough because then we would be able to target them at a time when they are only a small portion of the tumor,” says Birrer.
 
 
Takeaways

Saatchi is right: for decades ovarian cancer treatment has been wanting, but the studies we describe in Part 2 of this Commentary suggest that the tide might be turning for people living with ovarian cancer. So don't miss Part 2 next week!
 
 