  • The clandestine status of cannabis and its attendant risks are beginning to erode
  • The idea of cannabis as an evil drug is a relatively recent phenomenon
  • Plants have been the historical source of medicine for most of human history, and cannabis is no exception
  • There is a large and growing pharmacological and clinical interest in cannabis as medicine
  • Two distinct legal markets for cannabis are emerging: the tightly regulated pharmaceutical market and the less regulated market of herbal preparations
  • The FDA has approved cannabis-related drugs, which are used for a number of indications
  • There may be a recognizable pathway leading to more cannabis compounds becoming medicine
  • To become accepted as a medicine that doctors prescribe, pharmacists supply and healthcare providers support, cannabis compounds need to demonstrate their biochemical uniformity, stability, safety and efficacy
 
Medical cannabis and modern healthcare

Today, for most people, cannabis medicine involves the black market, with its attendant risks and lack of quality control. But this is changing to a more desirable alternative. As legal opinion shifts and clinical studies multiply, the clandestine nature of cannabis and its attendant risks are beginning to erode, and two distinct legal markets for medical cannabis are emerging. One is the tightly regulated pharmaceutical market, in which medical cannabis provides safe and effective pharmaceutical solutions that doctors prescribe, pharmacists supply, and healthcare providers support; the other is the less regulated market of herbal preparations. ArcView Market Research reported that annual sales of legal cannabis in the US grew by 25% in 2016, to US$6.7bn, and projects that sales will reach US$21.8bn by 2020. This Commentary focuses on the pharmaceutical market, which relies on randomized clinical studies to demonstrate biochemical consistency, safety and efficacy.
 
The cannabis plant and its main properties

Cannabis is a genus of annual herbaceous flowering plants, which includes two familiar subspecies or chemovars: C. sativa and C. indica. Modern molecular techniques applied to the taxonomic classification of cannabis have produced many more classifications, which will become increasingly relevant as the plant’s medicinal qualities are identified. Cannabis is indigenous to central Asia and India, but can be grown in almost any climate anywhere in the world, and is increasingly cultivated by means of indoor hydroponic technology. The cannabis plant contains more than 100 cannabinoids, chemical compounds secreted by its flowers. About 60 of these have been identified as pharmacologically active, the primary ones being delta-9-tetrahydrocannabinol, commonly known as THC, and cannabidiol, commonly known as CBD. THC is the principal mind-altering ingredient, while CBD does not affect the mind or behavior.
 
Cannabis as medicine

Medical cannabis refers to the use of extracts from the cannabis plant - cannabinoids - to treat a range of conditions or their symptoms. Cannabinoids can be administered orally, sublingually, or topically; they can be smoked, inhaled, mixed with food, or made into tea. When cannabis is consumed, cannabinoids bind to receptor sites throughout the brain and body, and different cannabinoids have different effects depending on which receptors they bind to. For example, THC binds to receptors in the brain called CB-1, while CBD has a strong affinity for CB-2 receptors located throughout the body. By targeting the right receptors with the right cannabinoid, different types of relief are achievable. THC is the most active cannabinoid; it has dominated research into medical cannabis and has resulted in FDA-approved drugs. Although CBD is one of the least active cannabinoids, it has come to dominate more recent research, as it is considered to have a relatively wide scope of potential medical applications with fewer side effects than THC.
 
Pot-ted history

Plants have been the source of medicine for most of human history, and continue to provide the base material of about 25% of modern pharmaceuticals. Approved medicines of botanical origin are relatively common, but require evidence-based randomized clinical studies to demonstrate their biochemical uniformity, stability, safety and efficacy. Medical cannabis is no exception, and the FDA has approved drugs derived from cannabinoids and synthetic cannabinoids. However, regulators have not approved the entire cannabis plant as medicine, because there are insufficient clinical studies to demonstrate that its benefits outweigh its potential risks to the patients it is meant to treat.

For centuries the cannabis plant was used throughout the world for medicinal purposes; only in recent history has it acquired the status of a dangerous drug and been banned. Its first recorded use dates to about 4000 BC, when an extract of the cannabis plant was used in China as an anesthetic during surgery. The Chinese went on to use cannabis compounds extensively for a range of conditions including malaria, constipation, rheumatic pains, "absentmindedness" and "female disorders."
 
From China, cannabis travelled throughout Asia into the Middle East, Africa, Europe, and eventually to the US. Galen, a prominent Greek doctor and scientist in the Roman Empire, noted cannabis as a remedy. In India it was used to lower fevers, quicken the mind, induce sleep, cure dysentery, stimulate appetite, improve digestion, relieve headaches, and cure venereal disease. The Vikings and medieval Germans used cannabis for toothache, and for relieving pain during childbirth. In Africa it was used for a variety of fevers including malaria. Despite its extensive medicinal use in early history, there were warnings against the over-use of cannabis as it was said to result in “seeing demons”.

 
Opinion changing

The idea of cannabis as an evil drug is a relatively recent phenomenon. Despite its contemporary clandestine status, there is a large and growing pharmacological and clinical interest in cannabis as medicine, and a recognizable pathway leading to its return to mainstream medicine. As early as 1985 the FDA approved cannabinoids as medicine. As of June 2016, 25 American states and Washington DC had legalized cannabis for medical use, and Germany is expected to follow suit. In the UK, more than half of national parliamentarians, including the former deputy Prime Minister, want to see the legalisation of medical cannabis. In March 2017, Oxford University announced that it is to launch a £10m global centre of excellence in cannabinoid research. The program, a partnership between the University and Kingsley Capital Partners, a London-based private equity business, will examine the role of cannabis medicines in treating pain, cancer and inflammatory diseases.
  
FDA approved

The FDA has approved two cannabis-related drugs: dronabinol and nabilone. The former contains the psychoactive compound THC extracted from the resin of C. sativa. The latter contains a synthetic cannabinoid that mimics THC, the primary psychoactive compound occurring naturally in cannabis. Both treat chemotherapy-induced nausea and vomiting (CINV) and extreme weight loss caused by HIV/AIDS, among a number of other indications.

Nabiximols, a cannabis extract containing both THC and CBD, has been approved in 27 countries as a mouth spray to alleviate neuropathic pain, spasticity, overactive bladder, and other symptoms of multiple sclerosis. Scientists have also recently developed Epidiolex, a CBD-based liquid drug to treat certain forms of childhood epilepsy, although it has not yet undergone clinical studies.

 
Chemotherapy-induced nausea and vomiting
 
Chemotherapy-induced nausea and vomiting (CINV) is one of the most common and feared adverse events experienced by cancer patients. Its occurrence depends on the dose and type of chemotherapy agent used, but it tends to be more prevalent in anxious women under 50 who do not drink alcohol and who have a history of sickness during pregnancy. Despite advances in the prevention and treatment of emesis, 70% to 80% of cancer patients experience CINV, and many of them delay or refuse future chemotherapy treatments, or contemplate stopping all treatment, because of fear of further nausea and vomiting.
 
There are several drug classes for the prevention and management of CINV. In 1985 the FDA approved a cannabinoid, dronabinol, for the treatment of CINV in patients who have failed to respond adequately to conventional antiemetic treatment. The number of people taking cannabinoids for therapeutic purposes is increasing, but very few medicines based on cannabis have yet been developed on rigorous scientific principles. Ahmed Ahmed, professor of gynaecological oncology at Oxford, says, “This field holds great promise for developing novel therapeutic opportunities for cancer patients.”
 
The endogenous cannabinoid system is a significant pathway in the emetic response. Cannabinoids can reduce or prevent chemotherapy-induced emesis by acting at central CB-1 receptors, blocking the pro-emetic effects of endogenous compounds such as dopamine and serotonin; by acting as CB-1 agonists, cannabinoids used as a treatment produce an antiemetic effect. Notwithstanding, few studies have evaluated medical cannabis, alone or in combination, to treat CINV, and those that have been published show mixed results. THC must be dosed relatively highly, so adverse effects may occur comparatively frequently. Some investigations suggest that low doses of THC improve the efficacy of other antiemetic drugs when given together.

 
Some additional indications

In addition to its ability to reduce nausea, THC is effective as an appetite stimulant in both healthy and sick individuals, and is used to boost appetite in patients with cancer, HIV-associated wasting syndrome, and patients with anorexia.

Another common use of medical cannabis is as an analgesic. Studies suggest that THC activates pathways in the central nervous system, which work to block pain signals from being sent to the brain. THC has been shown to have some effect against neuropathic, cancer and menstrual pain, headache, and chronic bowel inflammation.

The high that users get from THC is also associated with temporary loss of memory. For most people this would be concerning, but for people with post-traumatic stress disorder (PTSD), memory loss can be positive. PTSD is a chronic, disabling mental health condition triggered by a significant event, which results in traumatic flashbacks, nightmares, and emotional instability. A 2013 study published in the journal Molecular Psychiatry reported a correlation between the quantity of cannabinoid CB-1 receptors in the human brain and PTSD, and concluded that oral doses of THC could help relieve PTSD-related symptoms.

Review of clinical studies

In 2015 a systematic review of the pros and cons of cannabinoids was published in the Journal of the American Medical Association. The paper analyzed 79 clinical studies of cannabinoids, involving 6,462 participants, for a number of indications including: CINV, chronic pain, appetite stimulation in HIV/AIDS, spasticity due to multiple sclerosis or paraplegia, depression, anxiety disorder, sleep disorder, psychosis, glaucoma, and Tourette syndrome.

Most studies in the review showed improvement in symptoms associated with cannabinoid use compared with placebo, although the improvements did not reach statistical significance in all studies. The review also reported an increased risk of short-term adverse effects associated with cannabinoids, some of them severe. Common among these were dizziness, dry mouth, nausea, fatigue, somnolence, euphoria, vomiting, disorientation, drowsiness, confusion, loss of balance, and hallucination.

The review concluded that, “There was moderate-quality evidence to support the use of cannabinoids for the treatment of chronic pain and spasticity. There was low-quality evidence suggesting that cannabinoids were correlated with improvements in nausea and vomiting due to chemotherapy, weight gain in HIV infection, sleep disorders, and Tourette syndrome. Cannabinoids were also correlated with an increased risk of short-term adverse effects.”

 
Clinical studies design challenges

Although cannabis compounds are currently used to treat disease or alleviate symptoms for a number of conditions, their efficacy for some specific indications is not altogether clear. This reflects the relative dearth of clinical studies carried out on cannabinoids. Further, there are several design challenges associated with clinical studies involving THC. One is whether cannabis components beyond THC contribute to its medicinal effects. Another is the difficulty of providing adequate blinding for psychoactive compounds such as THC. Clinical studies generally show a degree of subjective improvement associated with the additional attention participants receive, and this is compounded when a study’s outcome measures are subjective responses, such as pain and mood, as in the case of THC.
 
Gold standard
 
To be accepted by doctors, supplied by pharmacists and supported by healthcare providers, a medical cannabis product must be standardized, consistent, and of a quality equal to any recognized pharmacological compound. It must have a secure supply chain, an appropriate low-risk delivery system, and minimal adverse effects. Although there are entities working to bring this about, the overwhelming majority of cannabis available today is unregulated, which presents significant challenges: biochemical variability from one chemovar to another, the possible presence of bacteria and pesticides, and variation in potency.
 
Nabiximols
 
A significant success of medical cannabis is nabiximols, an oromucosal spray produced from whole cannabis extracts, which is used to alleviate neuropathic pain, spasticity, overactive bladder, and other symptoms of multiple sclerosis. Currently nabiximols is available in 27 countries, is biochemically uniform and provides an easy-to-use, reliable delivery system with immediate onset, allowing a therapeutic window for control of symptoms without intoxication. This suggests a gold standard benchmark, which other cannabis-based medicines will be required to follow.

 
Takeaways
 
There seems to be a clear pathway for medical cannabis to grow in importance in modern pharmacology. Modern technologies that facilitate advanced cultivation and extraction processes appear well positioned to support the creation and development of cannabis products that target specific medical needs, for maximum relief of a number of chronic conditions.
 
 
  • Each year about 1.7m women are diagnosed with breast cancer worldwide and over 0.5m die from the condition
  • Between 5% and 10% of these breast cancers result from harmful gene mutations
  • BRCA1 and BRCA2 gene mutations are the most common cause of hereditary breast cancer
  • 45% to 85% of women with a BRCA mutation will develop breast cancer in their lifetime compared to 12% of women in the general population
  • Most women do not know if they have a harmful BRCA mutation
  • Testing for the BRCA gene is now affordable, fast and accessible
  • Surgical interventions of women with BRCA mutations can significantly reduce their risk of developing breast cancer and substantially increase cancer survival
  • Genetic test results for breast cancer are fraught with uncertainty because testing reveals the likelihood of developing cancer rather than a certain fate
  • Research suggests that BRCA test results are not being clearly communicated to women
  • Best practice demands that expert counselors discuss genetic testing and help interpret results
 
Breast cancer and harmful BRCA gene mutations


Few things frighten a woman more than discovering a lump in one of her breasts. The standard treatment - surgery, followed by radio- and chemotherapy - can be disfiguring, painful, and sometimes unsuccessful, and the impact of the disease is felt by far more individuals than just those who have the diagnosis. The good news is that over the past 30 years breast cancer survival rates in most developed countries have been improving, largely due to screening, earlier diagnosis and improved treatments. The bad news is that in most developed countries a woman is twice as likely to be diagnosed with breast cancer as she was 60 years ago.
 
Harmful BRCA genes mutations

Between 5% and 10% of breast cancers are thought to be due to gene mutations, and harmful BRCA mutations account for 20% to 25% of these. Women who inherit a BRCA1 mutation have a 60% to 90% risk of developing breast cancer in their lifetime, and those who inherit a BRCA2 mutation have a 45% to 85% risk, compared with 12% for women in the general population. Most women do not know whether they carry a harmful BRCA mutation, but on discovering that they do, many elect to have a bilateral mastectomy. This is a significant procedure with potential risks and side effects, but it can reduce mortality risk by about 50%.
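To put these figures in perspective, here is a minimal Python sketch of the arithmetic. The risk percentages are the ones quoted in this Commentary; real-world estimates vary by study and population, so the numbers are illustrative only.

```python
# Lifetime breast cancer risk figures quoted in the text (illustrative only;
# published risk estimates vary by study and population).
GENERAL_POPULATION_RISK = 0.12   # ~12% lifetime risk, general population
BRCA1_RISK_RANGE = (0.60, 0.90)  # 60% to 90% with a BRCA1 mutation
BRCA2_RISK_RANGE = (0.45, 0.85)  # 45% to 85% with a BRCA2 mutation

def relative_risk(carrier_risk: float, baseline: float = GENERAL_POPULATION_RISK) -> float:
    """Lifetime risk of a mutation carrier divided by the population baseline."""
    return carrier_risk / baseline

# Even at the low end of the quoted range, a BRCA1 carrier is five times
# more likely than average to develop breast cancer in her lifetime.
low, high = BRCA1_RISK_RANGE
print(f"BRCA1 relative risk: {relative_risk(low):.1f}x to {relative_risk(high):.1f}x")
```

The same calculation applied to the BRCA2 range gives a relative risk of roughly 3.8x to 7.1x, which is why a positive test result so often prompts consideration of risk-reducing surgery.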
 
The gold-standard screening for breast cancer is an x-ray picture of the breast (mammography), but women are increasingly turning to genetic testing as their awareness of the harmful BRCA mutations increases and genetic testing becomes more accessible and affordable. However, results from these tests are not straightforward and are often not communicated well. This can increase anxiety in women with suspected breast cancer, and lead them to elect unnecessary interventions and procedures.
 
This Commentary describes how advanced genetic testing, together with expert counselling, helps women improve their management of breast cancer.
 

Breast Cancer
 
Cancer is a group of diseases that cause cells in the body to change and grow out of control: they mutate. Most types of cancer cells eventually form a lump or mass called a tumor, and are named after the part of the body where the tumor originates, e.g. “breast cancer”, although this convention is changing with the development of targeted personalized medicine. The exact cause of breast cancer is unknown, but the overwhelming majority of cases result from some combination of environment, lifestyle, and genes.

Breast cancer affects about 1 in 8 women at some point during their life, usually after the menopause, and is the most common cancer in women. The majority of breast cancers begin in the parts of the breast tissue made up of glands for milk production, called lobules, and the ducts that connect the lobules to the nipple; the remainder of the breast is made up of fatty, connective, and lymphatic tissue. Most invasive breast cancers (those that have spread from where they started) are found in women 55 and older, and women with a family history of the disease have an increased risk.

Each year about 1.7m women are diagnosed with breast cancer worldwide, and over 0.5m die from the condition. However, in developed economies more and more women survive the disease. In the US, for instance, the average 5-year survival rate for people with breast cancer is 89%; the 10-year rate is 83%, and the 15-year rate is 78%. Other developed countries have similar success rates. Breast cancer becomes fatal when it spreads to the bones, lungs, liver and other organs, so early detection remains the cornerstone of the condition’s management.
Although breast cancer is thought of as a disease of the developed world, it is increasing rapidly in emerging countries, where the majority of cases present later and women die earlier than in developed countries: almost 50% of breast cancer cases and 58% of deaths occur in emerging economies. This is partly because women there generally have relatively poor knowledge of the risk factors, symptoms and methods for early detection. They also experience cancer fatalism, may rely on alternative medicine, and often lack autonomy in decision making, which results in delays in seeking, or avoidance of, evidence-based medicine.
 
Mammography
 
Mammography, which has long been the mainstay of breast cancer detection, is a specific type of breast imaging that uses low-dose x-rays to detect small changes in the breast before there are any other signs or symptoms, at the stage when the disease is most treatable. Mammography is noninvasive, relatively inexpensive, and has reasonable sensitivity (72% to 88%), which increases with age. It can also be used to detect and diagnose breast disease in women experiencing symptoms such as a lump, pain, or nipple discharge. If breast cancer is found at an early stage, there is an increased chance of breast-conserving surgery and a better prognosis for long-term survival. Most developed countries operate breast-screening programs, which regularly provide mammography for women between certain ages.
 
Advances in mammography

In recent years, mammography has come under increased scrutiny for false positives and excessive biopsies, which increase radiation exposure, cost and patient anxiety. In response to these challenges, new forms of mammography screening have been developed, including low-dose mammography, digital mammography, computer-aided detection, tomosynthesis (also called 3-D mammography), automated whole-breast ultrasound, molecular imaging and MRI. Notwithstanding, there is increasing awareness of subpopulations of women for whom mammography has reduced sensitivity. More recently, women have turned to genetic testing to gain a better understanding of their risk of inherited breast cancer.
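The false-positive problem is partly a matter of arithmetic: even a sensitive test produces mostly false alarms when the condition being screened for is rare in any one screening round. A rough Bayes' rule sketch in Python illustrates this. The 80% sensitivity sits inside the 72% to 88% range quoted above, but the specificity and per-round prevalence are assumed illustrative figures, not numbers from this Commentary.

```python
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(cancer | positive mammogram), by Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# 80% sensitivity is within the 72-88% range quoted in the text; the 90%
# specificity and 0.5% prevalence per screening round are assumed figures.
ppv = positive_predictive_value(sensitivity=0.80, specificity=0.90, prevalence=0.005)
print(f"Chance a positive screen reflects cancer: {ppv:.1%}")
```

Under these assumptions only around 4% of positive screens reflect an actual cancer, which is why a positive mammogram leads to further imaging or biopsy rather than being treated as a diagnosis in itself.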
 
Genes

Every cell in your body contains genes, which carry the genetic code for your body. This code not only determines features such as the color of your eyes and hair, but also provides information that affects how the cells in your body behave: for example, how they grow, divide and die. Information in your genes is inherited from both parents, and you pass it on to your children. A change in your genetic code that affects the function of a gene is called a mutation. Many inherited gene mutations have no effect on your health, but some do: BRCA1 and BRCA2 mutations account for 20% to 25% of all inheritable female breast cancers and 15% of ovarian cancers.
  
BRCA genes

In normal cells, BRCA genes are tumor suppressor genes that assist in preventing cancer developing by making proteins that help to keep cells from growing abnormally. Mutated versions of BRCA genes cannot stop abnormal growth, and this can lead to cancer. Mutated BRCA genes have a higher prevalence in certain ethnic groups, such as those of Ashkenazi Jewish descent.

In the video below Professor Robert Leonard, a medical oncologist and an authority on breast cancer, describes how BRCA genes influence breast and ovarian cancer risk. BRCA1 runs in families and may also increase a woman’s risk of developing fallopian tube and peritoneal cancers. BRCA2 also runs in families; it is more breast cancer-specific, but a less commonly inherited abnormality. Neither gene may be detectably abnormal even in a family with a strong inherited pattern of breast cancer, but there is a significant possibility of finding one in people with a family history of breast and ovarian cancer. Breast and ovarian cancers associated with BRCA mutations tend to develop at younger ages than their non-hereditary counterparts.

 
 
Enhanced risk when family members have cancer
 
In December 2013, the US Preventive Services Task Force recommended that women who have family members with breast, ovarian, fallopian tube, or peritoneal cancer be evaluated to see if they have a familial history that is associated with an increased risk of a harmful mutation in one of the BRCA genes. Compared to women without a family history of cancer, risk of breast cancer is about 2 times higher for women with a close female relative who has been diagnosed with cancer; nearly 3 times higher for women with two relatives, and nearly 4 times higher for women with three or more relatives. Risk is further increased when the affected relative was diagnosed at a young age. Notwithstanding, the Preventive Services Task Force recommends against BRCA testing for women with no family history of cancer.
  
The Angelina Jolie effect

The Hollywood actress and filmmaker Angelina Jolie lost her grandmother and aunt to breast cancer and her mother to ovarian cancer. After discovering that she carried a maternally inherited pathogenic BRCA1 mutation, and being told that she had an 87% chance of developing breast cancer, and a 50% chance of ovarian cancer, Jolie elected to have her breasts, ovaries and fallopian tubes removed. After surgery her risk of developing breast cancer in later life fell to 5%.
 
In May 2013, Jolie described her decision in a New York Times (NYT) article: “I am writing about it now because I hope that other women can benefit from my experience … Cancer is still a word that strikes fear into people’s hearts, producing a deep sense of powerlessness. But today it is possible to find out through a blood test whether you are highly susceptible to breast and ovarian cancer, and then take action.”
 
Over-testing by low-risk women
 
Findings published in December 2016 in the British Medical Journal suggest that tests for the BRCA genes shot up by 64% following Jolie’s article. Researchers analysed data on US health insurance claims from more than 9m women aged 18 to 64, and suggested that in just two weeks following Jolie’s NYT disclosure, 4,500 additional BRCA tests were carried out, costing the US healthcare system some US$13.5m. Interestingly, increased testing rates were not accompanied by a corresponding increase in mastectomy rates, which suggests that the additional testing did not identify new BRCA mutations. Thus, the Angelina Jolie effect might have encouraged over-testing among low-risk women.
 
Mindful of her influence on women’s decisions, in 2015 Jolie wrote another NYT article in which she qualified her earlier support for radical risk-reduction surgery for women carrying BRCA mutations. She said that although surgery had worked for her, it is not necessarily the optimal therapeutic pathway for all women, and stressed that non-surgical treatments could be more appropriate.
 
Traditional genetic testing for breast cancer risk was slow and expensive

Genetic testing to detect BRCA mutations has been available since 1996, but for many years it was under-used because of its scarcity, high cost, and the length of time it took to produce a result. The rapid development and plummeting cost of genetic testing, together with a 2013 US Supreme Court ruling that invalidated patents held by Myriad Genetics Inc. which had restricted BRCA testing, have resulted in the growth and accessibility of genetic testing.
 
BRCA testing is not straightforward

There are hundreds of mutations in the BRCA1 and BRCA2 genes that can cause cancer. Several different tests are available, including tests that look for a known mutation in one of the genes (i.e., a mutation already identified in another family member), and tests that check for all possible mutations in both genes. Commercial laboratories usually charge between US$450 and US$5,000 for BRCA testing, depending on whether only specific areas of a gene known to be abnormal are tested or hundreds of areas are examined within multiple genes. Tests that use traditional technology take several months to report findings, which means that even if a woman is tested at the time of diagnosis, she might not know the results before she has to decide on treatment.
 
Importance of regulated testing laboratories

Testing for the BRCA genes usually involves a blood sample taken in a doctor’s clinic and sent to a commercial laboratory. In 1988, the US Congress passed the Clinical Laboratory Improvement Amendments (CLIA) to ensure quality standards and the accuracy and reliability of results across all testing laboratories; since then, all legitimate genetic testing in the US has been undertaken in CLIA-approved facilities. During testing for BRCA mutations, the genes are separated from the rest of the DNA and scanned for abnormalities. Unlike other clinical screening tests, such as HIV tests and colonoscopies, which provide a simple positive or negative result, genetic testing is fraught with uncertainty because it reveals the likelihood of developing cancer rather than a certain fate.
 
BRCA1 and BRCA2 genetic test results
 
A positive BRCA test result indicates that you have inherited a known harmful mutation in the BRCA1 or BRCA2 gene. This means that you have an increased risk of developing breast and ovarian cancers, but it does not mean that you will actually develop cancer: some women who inherit a harmful BRCA mutation never do. A positive test result may create anxiety, and may compel clinicians to perform further tests and women to undergo premature and unnecessary clinical interventions; other women in a similar situation will opt for regular screening instead.
 
The potential benefits of a true negative result include a sense of relief regarding your future risk of cancer, learning that your children are not at risk of inheriting the family’s cancer susceptibility, and knowing that a range of interventions may not be required. However, a negative result can sometimes be difficult to interpret, because its meaning partly depends on your family’s history of cancer and on whether a BRCA mutation has been identified in a blood relative. Further, scientists continue to discover new BRCA1 and BRCA2 mutations and have not yet identified all potentially harmful ones. It is therefore possible to have a “negative” test result and still carry a harmful BRCA1 or BRCA2 mutation that has not been identified.
 
Counselling
 
Because of these uncertainties, and the agonising choices women with suspected breast cancer face, health providers in most developed countries recommend counselling as part of breast cancer treatment pathways. In the video below Dr John Green, a medical oncologist knowledgeable about the influence of inherited BRCA gene mutations on treatment options, underlines the importance of expert genetic counselling in helping women navigate their therapeutic pathways. Counselling is performed by a health professional experienced in cancer genetics, and usually covers the psychological risks and benefits of genetic tests; a hereditary cancer risk assessment based on a person’s personal and family medical history; a description of the tests, their technical accuracy and appropriateness; the medical implications of a positive or negative test result; the possibility of uncertain or ambiguous results; cancer risk-reducing treatment options; and the risk of passing on a mutation to children. As people become more aware of the genetic mutations linked to breast cancer, demand for genetic testing and counselling has increased, and in some instances it is challenging for genetic counsellors to keep pace.
 
 
The context in which genetic tests are carried out

A 2017 study published in the Journal of Clinical Oncology suggests that genetic test results for breast cancer are not being clearly communicated to women, and this could cause them to opt for treatments that are more aggressive than they actually need. To reduce this possibility the Royal Marsden NHS Trust Hospital in London has introduced the Mainstreaming Cancer Genetics programme. Since 2014 the Marsden has employed genetic counseling and used laboratories with enhanced genetic testing capabilities. This reduces processing time and costs, helps to meet the increased demand for rapid, accurate and affordable BRCA testing, and helps women make critical decisions about their treatment options.
 
“There were two main problems with the traditional system for gene testing. Firstly, gene testing was slow and expensive; secondly, the process for accessing gene testing was slow and complex,” says Nazneen Rahman, Professor and Head of Cancer Genetics at the UK’s Institute of Cancer Research in London. “We used new DNA sequencing technology to make a fast, accurate, affordable cancer gene test, which is now used across the UK. We then simplified test eligibility and brought testing to patients in the cancer clinic, rather than making them have another appointment, often in another hospital,” says Rahman.

The Marsden is now offering tests to three times more patients a year than before the programme started. The new pathway is faster, with results arriving within 4 weeks, as opposed to the previous 20-week waiting period. According to Rahman, “Many other centres across the country and internationally are adopting our mainstream gene testing approach. This will help many women with cancer and will prevent cancers in their relatives.”

 
Takeaways

The history of cancer is punctuated with overzealous interventions, many of which have had to be modified once it has been demonstrated that they could cause more harm than good.

As advanced genetic testing becomes affordable and more accessible, it is important that its results are interpreted with the help of genetic counsellors in a broader familial context, to help women make painfully difficult decisions about their treatment.
 
Migration to next-generation genetic testing technologies has many benefits, but it also introduces challenges, which arise from the choice of platform and software, and from the need for skilled bioinformatics analysts, who are in short supply. An efficient, cost-effective and accurate mutation detection strategy, and a standardized, systematic approach to the reporting of BRCA test results, are central for diagnostic laboratories wishing to provide a service at a time of increasing demand and downward pressure on costs.
 
 
 
  • A recent study suggests that a drug combined with dietary and lifestyle changes can prevent those with pre-diabetes from progressing to full blown type-2 diabetes (T2DM)
  • T2DM kills millions and costs billions
  • 35% of adults in the UK, and 50% in the US, now have pre-diabetes
  • The UK has launched the world’s first nationwide diabetes prevention program called Healthier You based on personal education and training
  • Prevalence rates of T2DM are still rising 
  • Research on the gut-brain axis suggests that drugs have a role to play in preventing T2DM
  • An optimum strategy might consist of appropriate drug therapy combined with appropriate education, which leverages ubiquitous 21st century communications infrastructures
  
A new therapeutic approach to pre-diabetes
 
Findings of an international clinical study published in The Lancet in 2017 suggest that 3.0mg of the drug liraglutide may reduce diabetes risk by 80% in individuals with pre-diabetes and obesity, and thereby significantly contribute to the prevention of type-2 diabetes (T2DM). The study investigated whether 3.0mg of liraglutide would safely delay the onset of T2DM in people with pre-diabetes.
 
Liraglutide is the active substance in a drug marketed as Victoza, which obtained FDA approval in 2010. Victoza is available in 6 mg/ml pre‑filled pens, and is used as an adjunct to diet and exercise to improve glycaemic control in adults with T2DM. Victoza is also used as an add-on to other diabetes medicines, when these, together with exercise and diet, do not provide adequate control of blood glucose.
  

Pre-diabetes

Pre-diabetes is a condition that develops when your blood sugar levels are at the very high end of the normal range, but not quite high enough for a diagnosis of T2DM. Risk factors include age, weight and ethnicity. People of South Asian origin are up to six times more likely to develop pre-diabetes, because a genetic susceptibility means they start to develop insulin resistance at a much lower Body Mass Index (BMI). With pre-diabetes your body begins to have trouble using the hormone insulin, which is needed to transport glucose, your body’s source of energy, into your cells via the bloodstream. Pre-diabetes means that your body either does not make enough insulin or does not use it well (insulin resistance). In either case, too much glucose can build up in your blood, leading to higher-than-normal blood glucose levels and perhaps pre-diabetes. Blood glucose is measured using a test called HbA1c, which provides a picture of your blood sugar levels over the past two to three months: it counts the number of glucose molecules stuck to red blood cells, revealing how much sugar your blood has carried over the two-to-three-month lifespan of those cells. An HbA1c between 5.7% and 6.4% indicates pre-diabetes (6.5% or above is officially diabetes). Dr Roni Sharvanu Saha, a consultant in acute medicine, diabetes and endocrinology at St George's Hospital, London describes pre-diabetes:
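The HbA1c bands above can be expressed as a simple classification rule. The sketch below is illustrative only, using the 5.7%, 6.4% and 6.5% cut-offs quoted in this Commentary; the function name is ours, and it is not a clinical tool.

```python
def classify_hba1c(hba1c_percent: float) -> str:
    """Classify an HbA1c reading (%) using the bands described above."""
    if hba1c_percent >= 6.5:    # 6.5% or above is officially diabetes
        return "diabetes"
    elif hba1c_percent >= 5.7:  # 5.7-6.4% indicates pre-diabetes
        return "pre-diabetes"
    else:
        return "normal"

print(classify_hba1c(5.5))  # normal
print(classify_hba1c(6.0))  # pre-diabetes
print(classify_hba1c(7.1))  # diabetes
```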
 


Prevalence and cost 
 
It is estimated that 35% of adults in the UK, and 50% in the US, now have pre-diabetes. Around 5 to 10% of these will progress to "full-blown" T2DM in any given year. Because there are no obvious symptoms of pre-diabetes, the overwhelming majority of people with the condition do not know they have it, and are not aware of the long-term risks to their health, which include T2DM and its complications: heart attack, stroke, kidney failure, blindness and lower-limb amputation. Over the past decade, the prevalence of T2DM has increased by almost two-thirds, making it one of the world’s most common long-term health conditions.
 
An estimated £14bn is spent each year on treating diabetes and its complications in the UK. Treating obesity-linked illnesses costs £10bn a year. The annual medical cost of treating diabetes in the US is about US$176bn, and the cost of diabetes in reduced productivity is some US$69bn each year.
 
The gut-brain axis

The study published in The Lancet was led by John Wilding, Professor of Medicine, University of Liverpool, and is a continuation of work he started in 1996 when part of a team at Hammersmith Hospital in London, which first showed that the hormone GLP-1, on which liraglutide is based, was involved in the control of food intake.
 
Over the past two decades scientists have increased their understanding of the two-way communications between the gut and the brain, not only through nerve connections between the organs, but also through biochemical signals, such as hormones that circulate in the body. Dr Sufyan Hussain, Specialist Registrar and Honorary Clinical Lecturer in Diabetes, Endocrinology and Metabolism at Imperial College London, describes the gut-brain axis.
 
 
Targeting gut-brain pathways

An increasing number of different gut microbial species are now postulated to regulate brain function in health and disease. The westernized diet, which is high in saturated fats, red meats, and carbohydrates, and low in fresh fruits and vegetables, whole grains, seafood, and poultry, is hypothesized to be the cause of high obesity levels in many countries. For example, 63% and 69% of adults in the UK and US respectively are either overweight or obese, and therefore at risk of T2DM. Experimental and epidemiological evidence suggest that the gut microbiota is responsible for significant immunologic, neuronal, and endocrine changes that lead to obesity. The gut–brain axis influences obesity, and researchers such as Wilding have targeted communication pathways between the nervous system and the digestive system in an attempt to treat metabolic disorders. 
 
Bariatric surgery and diabetes

A previous HealthPad Commentary describes how bariatric surgery is associated with gut-brain signals, which promote the remission of diabetes in patients. Many of the mechanisms that underlie how bariatric surgery produces metabolic benefits remain unclear, but researchers do know that such surgical procedures elevate levels of the hormones peptide YY (PYY), and glucagon-like peptide-1 (GLP-1) that help to reduce appetite and have effects on the central nervous system.
 
Liraglutide

Liraglutide is a GLP-1 receptor agonist, which interacts with the part of the brain that controls appetite and energy intake. The drug slows food leaving the stomach, helps prevent your liver from making too much sugar, and helps the pancreas to produce more insulin when your blood sugar levels are high. The most common side effects with liraglutide are nausea and diarrhoea.
 
The clinical study

The three-year study followed 2,254 adults with pre-diabetes at 191 research sites in 27 countries worldwide. Participants were randomly allocated to either liraglutide or a placebo delivered by injection under the skin once daily for 160 weeks. Participants in the study were also placed on a reduced calorie diet and advised to increase their physical activity. The study showed that three years of continuous treatment with once-daily 3.0mg of liraglutide, in combination with diet and increased physical activity, reduces the risk of developing T2DM by 80% and results in greater sustained weight loss compared to the placebo.

"On the basis of our findings, liraglutide 3.0mg can provide us with a new therapeutic approach for patients with obesity and pre-diabetes to substantially reduce their risk of developing type 2 diabetes and its related complications . . . . It is very exciting to see a laboratory observation translated into a medicine that has the potential to help so many people, even though it has taken over 20 years,” says Wilding.
 
World’s first nationwide diabetes prevention program

NHS England, Public Health England and Diabetes UK launched the world’s first nationwide diabetes prevention strategy, Healthier You, in 2016. It provides personal coaches to educate people at risk of T2DM about healthy eating and lifestyle, and personal trainers to deliver bespoke physical exercise programs to help people lose weight. Healthier You is expected to be rolled out across the whole country by 2020, with 100,000 referrals available each year after that.
 
Extrapolating from previous studies

International clinical studies have shown evidence that lifestyle interventions such as those used in Healthier You can prevent or delay the onset of T2DM. However, the validity of generalizing the results of previous prevention studies is uncertain. Interventions that work in some societies may not work in others, because social, economic, and cultural forces influence diet and exercise. The UK’s Public Accounts Committee has expressed doubts about the way Healthier You is setting about its task, and has warned that, "By itself, it will not be enough to stem the rising number of people with diabetes".
 
Failure of the diabetes establishment and the Public Accounts Committee

Healthier You is a slow, labor-intensive and expensive program, which is unlikely to have more than a relatively small impact. Let us explain. Assume that after 2020 Healthier You obtains its projected annual 100,000 referrals, and that all of them successfully reduce their blood glucose levels with diet and exercise. Assume also that the prevalence of pre-diabetes in the UK does not increase (which is not the case). Even then, Healthier You would take about 115 years to counsel the estimated 11.5m people in the UK with pre-diabetes, long after most of them would have died from natural causes.
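The arithmetic above is easy to verify. A back-of-envelope sketch using the two figures quoted in this Commentary (11.5m people with pre-diabetes, 100,000 referrals a year):

```python
# Back-of-envelope check of the Healthier You arithmetic.
people_with_prediabetes = 11_500_000  # estimated UK total (from the text)
referrals_per_year = 100_000          # projected annual referrals after 2020

years_needed = people_with_prediabetes / referrals_per_year
print(f"Years to reach everyone: {years_needed:.0f}")  # 115
```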
 
21st century communications

Successfully changing the diets and lifestyles of the 11.5m people in the UK believed to have pre-diabetes, and slowing their progression to T2DM, will require 21st century technologies. Inexpensive and ubiquitous healthcare technologies used to educate and support diets and lifestyles abound. Increasingly, people are demanding devices that track weight, blood pressure, daily exercise and diet. From apps to wearables, healthcare technology lets people feel in control of their health, while also providing health professionals with more patient data than ever before. With more than 100,000 healthcare apps, rapid growth in wearables, and 75% of the UK population now owning a smartphone, digital technology is well positioned to significantly improve healthcare education and management.
 
Takeaways

Has Healthier You missed the elephant in the room? Wilding’s study suggests that an exercise and diet program needs to be complemented by a sustained program of appropriate drugs if we are to prevent those with pre-diabetes from progressing to full-blown T2DM. Further, simple arithmetic suggests that the education element of such a strategy should leverage ubiquitous 21st century communications infrastructures if it is to be efficacious.
 
 
  • Orthorexia nervosa is the term used to describe a serious and growing 'health food eating disorder'
  • The number of people suffering from the condition is believed to be millions and increasing
  • Orthorexia often begins by cutting out certain food groups and only eating 'clean' foods in an attempt to become healthier
  • Sufferers become obsessed with ‘clean’ food, often feel superior to people with different eating habits, and indulge in excessive fitness routines
  • Experts warn that orthorexia can lead to malnutrition, social isolation and depression.  
     
Orthorexia: when eating healthily becomes unhealthy

Have you encountered someone who genuinely wants to live a healthier life by eating well, but then becomes so obsessed with “healthy” food that they become unwell and socially isolated?
 
If you have, then the person is likely to be suffering from orthorexia nervosa, an emerging dietary disorder in which an individual restricts intake to only “healthy” foods, such as vegetables or organic produce, and in doing so develops an obsession with eating food believed to support “clean living”. Clean living means being mindful of a food's pathway between its origin and your plate, and eating food that is unprocessed, or minimally processed, refined and handled, making it as close to its natural form as possible.
 
Having said this, it is important to mention that some restrictive diets can be healthy, and even necessary, for medical, ethical or religious reasons. Also, being mindful about what you consume is a positive way to live a healthy life: there is nothing wrong with eating healthily. However, orthorexia is different: becoming fixated on “clean” food can result in serious health problems.
 
Orthorexia is not anorexia

Unlike anorexics, orthorexics are preoccupied with the quality of the food they consume rather than its quantity. The condition usually starts as a quest to be wholesome, when a person cuts out a food group, such as sugar, pulses, dairy products or processed food, but over time the diet becomes so restrictive, containing only a limited number of ‘safe foods’, that the person becomes malnourished.
 

Orthorexia nervosa
 
Orthorexia nervosa describes a pathological obsession with “clean” nutrition, which is characterized by a restrictive diet, ritualized patterns of eating, rigid avoidance of foods believed to be unhealthy or impure, and excessive exercise. Although prompted by a desire to be healthy, orthorexia may lead to nutritional deficiencies, medical complications, and a poor quality of life.
 
Social isolation

Typically, orthorexics spend significant amounts of their time scrutinizing the source of food, and how it is processed and packaged, to ensure that it is “clean”. The self-esteem of people with orthorexia becomes tied to their ability to stick to their diet of “clean” food, and they often feel guilty and angry with themselves if they stray from their strict list of acceptable foods. Orthorexics may develop feelings of social superiority, and judge those who indulge in “unclean” foods. Their obsession with specific foods often stops them socializing with family and friends, as social events frequently involve drinking and eating “unhealthily”. Excessive exercising also plays an important role in orthorexia.
 
Because orthorexics are “addicted” to thinking they are doing the right thing, they tend not to question whether their diet and lifestyle might have a negative impact on their health. Sufferers often take their eating habits to dangerous levels, cutting out food groups and combining their strict diet with too much exercise. In the video below, Dr Seth Rankin, founder and CEO of the London Doctors Clinic suggests that, “denial is the hallmark of an obsession”, and that you cannot treat someone with an obsession unless they recognize that they have a problem.
 
 
 
First diagnosed sufferer

Steven Bratman, a physician who coined the term orthorexia nervosa in 1997, diagnosed himself with the condition after he became obsessive about clean eating. According to Bratman, “Eventually orthorexia reaches a point at which the orthorexic devotes much of his life to planning, purchasing, preparing and eating meals.” Bratman developed 10 questions based on his experience to show how people with the condition could be identified: see below. Bratman’s work has not been validated as indicative of a syndrome; and therefore the diagnostic criteria for orthorexia are still uncertain.
 

Bratman’s 10-point test for orthorexia

Do you spend more than 3 hours a day thinking about your diet?
Do you plan your meals several days ahead?
Is the nutritional value of your meal more important than the pleasure of eating it?
Has the quality of your life decreased as the quality of your diet has increased?
Have you become stricter with yourself lately?
Does your self-esteem get a boost from eating healthily?
Have you given up foods you used to enjoy in order to eat the 'right' foods?
Does your diet make it difficult for you to eat out, distancing you from family and friends?
Do you feel guilty when you stray from your diet?
Do you feel at peace with yourself and in total control when you eat healthily?
RESULTS
Yes to 4 or 5 of the above questions means it is time to relax more about food.
Yes to all of them means a full-blown obsession with eating healthy food.
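The published interpretation of the self-test can be summarised in a few lines of code. This is an illustrative sketch only: the function name is ours, and note that Bratman's published cut-offs say nothing about scores between 6 and 9.

```python
def interpret_bratman(yes_count: int) -> str:
    """Interpret a count of 'yes' answers to Bratman's 10 questions,
    using only the two cut-offs stated in his published test."""
    if yes_count == 10:
        return "full-blown obsession with eating healthy food"
    elif 4 <= yes_count <= 5:
        return "time to relax more about food"
    else:
        return "not specified by Bratman's published cut-offs"

print(interpret_bratman(10))
print(interpret_bratman(4))
```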

 
Orthorexia is not officially recognized
 
One of the reasons you might not have heard of orthorexia is because it is not officially recognized as an eating disorder. It is not mentioned as a diagnosis in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which is published by the American Psychiatric Association, and popularly known as  “The Psychiatrist’s Bible”. Neither is the condition included in the World Health Organization's International Classification of Disease. Its lack of recognition leads primary care doctors to refer sufferers to nutritionists, which is a mistake because orthorexics require therapy that de-emphasizes food.
 
Prevalence difficult to determine

Because the condition is not officially recognized as a disease, there have been no epidemiological studies of it. Notwithstanding, orthorexia is believed to affect millions of people and to be on the increase. Some psychiatrists are beginning to study the condition and offer treatment to patients. In a recent survey of healthcare professionals, 66% reported having observed patients presenting with clinically significant orthorexia; and 66% suggested that the syndrome deserves more scientific attention.
 
The American National Association of Anorexia Nervosa and Associated Disorders suggests there are some 30m people in the US suffering from eating disorders. Instagram has 26m posts with the #eatclean hashtag. According to the UK’s National Osteoporosis Society, 20% of people under 25 are cutting out or reducing dairy from their diets. A 2016 National Diet and Nutrition Study undertaken by Public Health England found that the calcium intake of 1 in 6 women under 24 was “worryingly low”.
 
The ORTO-15 test and research beginnings

Orthorexia’s lack of formal status also means that there is a dearth of research on the condition, although published literature and research data have increased in the past few years. In 2005 a group of Italian scientists modified Bratman’s criteria for detecting orthorexia and developed the ORTO-15 questionnaire, which tests how far such criteria can support a psychometric, specific diagnosis. Researchers enrolled 525 participants: 404 were used in the construction of the ORTO-15 test, which comprised 15 multiple-choice questions, and 121 took part in its validation. A score below 40 implies the presence of obsessive pathological behavior characterized by a strong preoccupation with “clean” eating. The validation study reported that the ORTO-15 test has an efficacy of 73.8%, a sensitivity of 55.6%, and a specificity of 75.8%.
 
At least four studies have used the ORTO-15 test to evaluate the prevalence of a preoccupation with “clean” food. A 2010 Turkish study published in the journal of Comprehensive Psychiatry found that 43.6% of medical students showed a preoccupation with healthy food. A large Hungarian study published in 2014 in the journal BMC Psychiatry used the ORTO-15 test on 810 predominantly female (89.4%) university students, and found that over 70% had orthorexia tendencies. American studies have reported a prevalence of orthorexic behaviours ranging from 69% to 82.8% among undergraduate students.
 
The first study to examine the prevalence of orthorexia nervosa in athletes was completed in 2012 and showed a high frequency of orthorexia across both male (30%) and female (28%) athletes who were largely professional athletes involved in a range of sports. In 2013 a meta study published in Eating and Weight Disorders reviewed 11 studies of orthorexia. Findings suggest that the average prevalence rate for orthorexia was 6.9% for the general population, 35% to 57.8% for high-risk groups such as dieticians, other healthcare professionals, and artists. Risk factors were suggested to be obsessive-compulsive features, eating-related disturbances, and higher socioeconomic status.
  
Takeaways
 
Orthorexia appears to be on the increase at a time when the vast and escalating healthy lifestyle-information industry is complemented by the rapid exchange of ideas via social media. This means that individuals are regularly bombarded with dietary and healthcare advice, which they can share instantly. Orthorexia seems yet another serious condition of affluent societies, which is growing in significance.
 
 
  • 3m men in the US and 330,000 men in the UK are living with prostate cancer
  • The standard test used to diagnose prostate cancer is inaccurate
  • This inaccuracy causes anxiety in men and leads to unnecessary treatments
  • Standard therapies for prostate cancer can result in incontinence and impotence
  • Two new studies describe procedures that promise significant improvements in diagnosis and treatment
 
New developments in the management of prostate cancer
 
A vicious circle

There is general agreement on two issues concerning the management of prostate cancer: one, over-diagnosis and overtreatment rates are high; and two, there is a need to refine the standard prostate-specific antigen (PSA) diagnostic test.
 
The test does not provide information to allow doctors to determine which early-stage prostate tumors pose a risk of being aggressive and need treatment, and which should be left alone. Therefore, efforts to reduce the prevalence of prostate cancer by early detection using the PSA test can lead to over-diagnosis, which in turn can result in overtreatment, which in the case of prostate cancer, can result in incontinence and impotence.
 
Current official advice to UK GPs says: “The PSA test is available free to any well man aged 50 and over who requests it.” But, “GPs should not proactively raise the issue of PSA testing with asymptomatic men”. And, “GPs should use their clinical judgment to manage symptomatic men and those aged under 50 who are considered to have higher risk for prostate cancer”. In 2014 the National Institute for Health and Care Excellence (NICE) updated its guidelines and suggested that prostate cancer patients should avoid immediate treatment and keep their disease under “surveillance”.
 
A killer disease on the increase
 
Prostate cancer is increasing in significance worldwide. In many industrialized countries such as the US and the UK, it is one of the most common cancers and among the leading causes of cancer deaths. In developing countries it may be less common, but its incidence and mortality rates have been on the rise. In the US there are some 3m men living with the disease.  It is expected that in 2017, 161,000 new cases of prostate cancer will be diagnosed in the US, and 27,000 men will die from it. In the UK, there are some 330,000 men living with prostate cancer; each year around 47,000 men are diagnosed with the disease, and each year some 11,000 die from it, which equates to one every hour. Worldwide, there are an estimated 1.6m new cases of prostate cancer, and 366,000 prostate cancer deaths annually, making it the most commonly diagnosed cancer in men and the seventh leading cause of male cancer death.
 
The prostate and prostate cancer

The prostate is a small gland in men, which is located below the bladder and above the rectum. The urethra, which is the tube that carries urine and semen out of the body through the penis, goes through the centre of the prostate. In younger men the prostate is about the size of a walnut, but in older men it can be much larger. Symptoms of prostate cancer include persistent burning, difficult, frequent, uncontrolled or bloody urination in the absence of any infection. The average age of onset is 65 to 69. It is particularly prevalent in African-Caribbean men: affecting 1 in 4, and killing 1 in 12, which is double the rate for Caucasian men. The main risk factor is age: 80% of all men diagnosed with prostate cancer are over 65. Between 5% and 9% of cases occur in men with a family history of prostate, breast or ovarian cancer. Environmental factors are unclear, but rates of prostate cancer are lower in less urbanised societies, and rates rise when people move to a more westernised diet and lifestyle.
 
The prostate-specific antigen (PSA) test

In the 1980s a simple and cheap blood test was introduced to detect prostate cancer in its earliest, most curable, stage. In the video below Professor Karol Sikora, a cancer expert, describes the PSA test. Although used to detect prostate cancer, it is not a test for prostate cancer, and as a consequence, it has unresolved challenges. The most significant arises because the test is not accurate enough to either rule out or confirm the presence of cancer. Indeed, it is possible for PSA levels to be elevated when cancer is not present, and not to be elevated when it is present. More than 65% of men with elevated PSA levels do not have cancer. Excessive reliance on the test may lead to unnecessary interventions, while insufficient reliance may cause cancers to be missed.
 
 
Biopsies
 
A biopsy will often be recommended if a PSA test is high. It may also be recommended if a digital rectal examination (DRE) reveals a lump or some other abnormality in the prostate. The most commonly used biopsy for diagnosing prostate cancer is the trans-rectal ultrasound-guided prostate biopsy (TRUS-biopsy). This is a surgical procedure, in which tissue is removed from the prostate for microscopic examination. Each year, over 100,000 prostate biopsies are carried out in the UK and 1m in Europe.
 
75 to 80% of men who have TRUS-biopsies have no cancerous cells, and therefore did not need the biopsy. 20 to 25% do have cancerous cells, but a large percentage of these need no treatment because the cancers are slow growing. A 2014 paper by the Harvard School of Public Health estimates that only 3% of men suspected of prostate cancer have an aggressive tumor requiring immediate intervention.
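Applying these percentages to the roughly 100,000 TRUS-biopsies carried out in the UK each year gives a sense of scale. The sketch below is illustrative only: the 77.5% figure is our midpoint of the 75-80% range quoted above, and the variable names are ours.

```python
# Rough scale of annual UK biopsy outcomes, using the figures quoted above.
biopsies = 100_000                   # approximate UK TRUS-biopsies per year
no_cancer = round(biopsies * 0.775)  # midpoint of the 75-80% range
cancer_found = biopsies - no_cancer
aggressive = round(biopsies * 0.03)  # ~3% have an aggressive tumour

print(f"No cancerous cells found:      {no_cancer}")      # 77500
print(f"Cancerous cells found:         {cancer_found}")   # 22500
print(f"Aggressive, needing treatment: {aggressive}")     # 3000
```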
 
Further, doctors cannot tell from a biopsy whether cancerous cells are aggressive and need treatment, or whether they are developing slowly and do not require treatment. This creates confusion and anxiety among men, which prompts a percentage to opt for treatment even though the overwhelming majority do not need it. 25% of older men who elect to have treatment will become incontinent or impotent as a result, despite the fact that they did not need the treatment.
 
Active surveillance
 
In a significant proportion of men, prostate cancer cells grow slowly and never pose a serious risk to health and longevity. Evidence suggests that early treatment with either surgery or radiation does not reduce mortality rates, but leaves a significant percentage of men with urinary or erectile problems and other adverse effects. As a result, more men are willing to manage their condition by active surveillance, in which doctors monitor low-risk cancers closely and consider treatment only when the condition appears to make threatening moves toward growing and spreading. These men choose to live with prostate cancer until it advances, sometimes avoiding potentially life-altering side effects for several years. Active surveillance is a powerful solution to the problem of over-diagnosis and overtreatment.
 
New studies promise significantly improved management

Prostate cancer lags behind other cancers in diagnosis, treatment and research funding. But this is beginning to change. Over the past year, findings of two clinical studies promise significant improvements in the management of the condition.

The first, published in 2017 in the Lancet, describes a process that uses MRI-guided biopsies to improve the accuracy of prostate cancer diagnosis and spares those who do not have aggressive cancers from undergoing an unnecessary biopsy, so reducing the confusion and anxiety that prostate patients often experience.

The second, published in 2016 in the Lancet Oncology, reports findings on a laser-activated drug, derived from bacteria found at the bottom of the sea, that attacks and kills prostate cancer cells without removing or destroying the prostate gland. This is significant because it avoids the potential adverse effects of surgery and radiotherapy, which can render patients incontinent and/or impotent.

 
The multi-parametric MRI

The 2017 Lancet study used an advanced type of MRI scan, known as a multi-parametric MRI (MP-MRI), which in addition to recording the shape and size of the prostate also assesses the blood flow through the gland. Led by Dr Hashim Ahmed of University College London, the study comprised more than 500 British men with suspected prostate cancer. Results suggest that using the MP-MRI to triage men would safely reduce the number needing a primary biopsy by about 27%, and substantially improve the detection of clinically significant cancers. If subsequent TRUS-biopsies were directed by MP-MRI findings, up to 18% more cases of clinically significant cancer might be detected compared with the standard pathway of TRUS-biopsy for all.
 
A paradigm shift in prostate cancer treatment

The second study compared the safety and effectiveness of a new therapy called vascular-targeted photodynamic therapy (VTP, also known as TOOKAD) with active surveillance in men with low-risk prostate cancer. It was funded by STEBA Biotech, which holds the commercial licence for the therapy. Photodynamic therapy (PDT) is not new, and has been used to treat skin and other cancers where light can easily penetrate. VTP therapy, however, is viewed as a paradigm shift in prostate cancer care. It involves injecting a light-sensitive drug (padeliporfin or WST11) into the bloodstream, and then activating it with a laser to destroy cancerous tissue. The benefit of this approach is that damage to healthy prostate tissue is minimised, reducing the risk of side effects.
 
Findings

The study comprised 413 men with low-risk prostate cancer, and was carried out across 47 treatment sites in 10 European countries, most of which were performing VTP therapy for the first time. Participants were randomly assigned either to VTP therapy or to active surveillance. At the end of two years, 49% of the 196 men who received VTP treatment showed no signs of the disease and were in complete remission, compared with 13.5% of those given standard care. Only 6% of the VTP group later needed radical treatment, compared with 30% of the active surveillance patients, and VTP treatment doubled the average time to cancer progression, from 14 to 28 months.

A third of the VTP group experienced side effects, compared with only 10 of the active surveillance group. Notwithstanding, the study concluded that, “VTP therapy is a safe, effective treatment for low-risk, localised prostate cancer, which might allow more men to consider a tissue-preserving approach and defer or avoid radical therapy”. Patient monitoring will continue in order to ascertain whether the cancer stays away. Further studies should help clarify which cancers VTP treatment is most appropriate for, so that men can make more informed treatment decisions.

Study enhanced by MRI scanning
 
The study was conducted on men with low-risk prostate cancer; those at very low risk are better off with no treatment and no adverse effects. Professor Mark Emberton of University College London, the lead author of the study, believes the therapy will be most useful in patients in the “grey zone” between low and high risk. “The fact that the treatment was performed so successfully by non-specialist centres in various health systems is really remarkable”, says Emberton, because the lack of complications suggests that the treatment protocol is safe and relatively easy to scale.

At the beginning of the study MRI scans were not universally available, and Emberton believes MRI scanning, as suggested by the Ahmed 2017 study, will have a significant positive effect on prostate cancer treatment in the future. When carrying out biopsies without guidance from MRI scans, researchers had to guess where in the prostate the cancer was, so biopsies were sub-optimal. “If they were to do the study now, with the help of MRI scans, they could hit the cancerous parts of the prostate rather than going in blind and the results would be much better,” says Emberton.

 
Takeaways
 
These two recent studies are potential “game changers”. They promise to significantly enhance the management of prostate cancer, and to substantially reduce the uncertainty and anxiety, as well as the risk of life-altering side effects of treatment, experienced by millions of men living with the disease.
 
 
  • Each year cancer kills 8m people worldwide and costs billions
  • 40% of cancer deaths could be prevented by early detection
  • Nearly half of all cancer sufferers are diagnosed late when the tumors have already spread
  • Victims and doctors often miss early warning signs of cancer
  • Traditional tissue biopsies used to diagnose cancer are invasive, slow, costly, and often yield insufficient tissue
  • New blood tests are being devised that simultaneously detect cancer early and inform where the cancer is in the body
  • Such tests - liquid biopsies - are positioned to end the late diagnosis of cancer
  • But before liquid biopsies become common practice they need to overcome a number of significant challenges
  
World’s first blood tests that detect and locate cancer
 
Just as there is a global race among immunotherapists to enhance cancer treatment, so there is a parallel race among bioengineers to speed up and improve the detection of cancer. Such races are important because nearly half of all cancer sufferers are diagnosed late, when their tumors have already metastasized: 30% to 40% of cancer deaths could be prevented by early detection and treatment.
 
Here we describe advances in blood tests - “liquid biopsies” - which can simultaneously detect cancer early and identify its tissue of origin. We also describe the growing commercialization of the technology, and some significant hurdles that still have to be overcome.
 
A costly killer disease

Each year cancer kills more than 8m people worldwide, 0.6m in the US and nearly 0.17m in the UK. Survival rates for pancreatic, liver, lung, ovarian, stomach, uterine and oesophageal cancers are particularly low. A large proportion of people do not know they have cancer, and many primary care doctors fail to detect its early warning signs. According to The Journal of Clinical Oncology, a staggering 44% of some types of cancers are misdiagnosed. A significant proportion of people discover they have cancer only after presenting with a different condition at A&E. Each year, the total cost of cancer to the UK’s exchequer is nearly £20bn. In the US, national spending on cancer is expected to reach US$156bn by 2020. And as populations age, some cancer prevalence rates increase, despite substantial endeavours to reduce the burden of the disease.
  
The UK: a stereotypical case

The UK is indicative of what is happening elsewhere in the developed world with regard to cancer diagnosis and treatment. Epidemiological trends suggest that although progress is being made to fight the disease, much work is still required. Death rates for a number of individual cancer types have declined, but rates for a few cancers have increased.

Recently, the UK’s Department of Health invested £450m to improve diagnosis, including giving primary care doctors better access to tests such as CT and MRI scans. But each year there are still some 0.17m cancer deaths in the UK, and 1 in 4 British cancer patients is unlikely to live longer than 6 months after diagnosis because they and their doctors have missed early signs of the disease. For example, in the UK only 23% of lung cancer cases are diagnosed early, as are 32% of cases of non-Hodgkin lymphoma and 44% of ovarian cancers.

Not only does late detection increase morbidity and mortality, it significantly increases treatment costs. According to the UK’s NHS National Intelligence Network, a case of ovarian cancer detected early costs an average of £5,000 to treat, whereas one detected late at stage three or four costs £15,000. Similarly, a colon cancer patient detected early typically costs £3,000, while one not identified until a later stage would cost some £13,000.
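The cost differential compounds quickly across a screening population. A back-of-envelope calculation, using the NHS per-case figures quoted above but with a hypothetical cohort size and detection shift, illustrates the scale of potential savings:

```python
# Back-of-envelope savings from earlier detection, using the NHS per-case
# ovarian cancer treatment costs quoted above (£5,000 early vs £15,000 late).
# The cohort size and the fraction shifted to early detection are hypothetical.
EARLY_COST, LATE_COST = 5_000, 15_000   # GBP per ovarian cancer case

cases = 1_000      # hypothetical annual cases in a screening programme
shifted = 0.25     # hypothetical fraction moved from late to early detection
savings = cases * shifted * (LATE_COST - EARLY_COST)
print(f"£{savings:,.0f}")  # → £2,500,000
```

Even a modest shift toward early detection, on these illustrative numbers, frees millions of pounds per year for a single cancer type.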

 
Traditional tissue biopsies

Currently, oncologists look to pathologists for assistance in tumor diagnosis. Indeed, without a tissue diagnosis oncologists cannot proceed with therapy, nor are they able to discuss prognosis with the patient. After detecting a tumor through a physical examination or imaging, doctors use traditional tissue biopsies to gather information on the attributes of a patient’s cancer.
 
These pinpoint a cancer’s mutations and malignancy, but solid tissue biopsies are not always straightforward. While some cancers are easily accessed, others are hidden deep inside the body or buried in critical organs. Beyond the physical challenge, sampling such tumors can be dangerous for patients, and even successful samples do not always reflect current tumor dynamics. Further, traditional solid tissue biopsies are costly and time-consuming to perform; they can yield insufficient tissue for a good understanding of the tumor, and they can be hampered by a patient’s comorbidities and lack of compliance.

 
Two significant studies
 
Although solid tumor tissue is still the gold standard source for clinical molecular analyses, cancer-derived material circulating in the bloodstream has become an appealing alternative showing potential to overcome some of the challenges of solid tissue biopsies.

Findings of two significant studies of liquid biopsies published in 2017 promise a more effective and patient-friendly method for diagnosing cancer: one in the journal Genome Biology, and the other in the journal Nature Genetics. Both studies are on the cusp of developing the world’s first simple blood test, which can both detect early stage cancer, and identify where in the body the cancer is located.

The Genome Biology study
 
The study, reported in Genome Biology, describes findings of a blood test, referred to as the CancerLocator, developed by Jasmine Zhou, Professor of Biological and Computer Sciences, and her team at the University of California, Los Angeles (UCLA). The CancerLocator detected early-stage cancer in 80% of breast, lung and liver cases.
 
Zhou and her colleagues devised a computer program that uses genetic data to detect circulating tumor DNA (ctDNA) in blood samples. Once identified, the ctDNA is compared to a database of genetic information from hundreds of people to identify where the tumor is located. Zhou’s team discovered that tumors arising in different parts of the body have different signatures, which a computer can spot. “The technology is in its infancy and requires further validation, but the potential benefits to patients are huge … Non-invasive diagnosis of cancer is important, as it allows the early detection of cancer, and the earlier the cancer is caught, the higher chance a patient has of beating the disease,” says Zhou.
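The matching step can be pictured as comparing a blood sample’s signature against reference tissue profiles and picking the closest one. The sketch below is a deliberate simplification with made-up numbers; the actual CancerLocator uses a probabilistic model over genome-wide methylation data, not a nearest-profile rule:

```python
# Hypothetical sketch: match a ctDNA signature (here, methylation fractions
# at four marker sites) to the nearest reference tissue profile.
# Real tools such as CancerLocator use probabilistic deconvolution instead.

def closest_tissue(sample, reference_profiles):
    """Return the tissue whose reference signature is nearest (Euclidean)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(reference_profiles, key=lambda t: distance(sample, reference_profiles[t]))

# Toy reference signatures, one per candidate tissue of origin
references = {
    "liver":  [0.9, 0.1, 0.8, 0.2],
    "lung":   [0.2, 0.8, 0.3, 0.7],
    "breast": [0.5, 0.5, 0.9, 0.1],
}

print(closest_tissue([0.85, 0.15, 0.75, 0.25], references))  # → liver
```

The point of the toy example is only the principle Zhou describes: different tissues shed DNA with recognisably different signatures, so a computer can infer the tumor’s location from blood alone.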
 
The Nature Genetics study

Researchers led by Kun Zhang, Professor of Bioengineering at the University of California, San Diego (UCSD), are responsible for the study published in the journal Nature Genetics. Zhang developed a test that examined ctDNA in blood from cancer patients and, like Zhou, discovered that not only could it detect cancer early, but could also locate where the tumor is growing in the body. When a tumor starts to take over a part of the body, it competes with normal cells for nutrients and space, killing them off in the process. As normal cells die, they release their DNA into the bloodstream; and that DNA can identify the affected tissue.
 
“There are many technical differences in how each approach works … The work by the UCLA group is a computer program that uses data published previously by other groups, and has reduced the cancer-detection error from roughly 60% to 26.5%. In contrast, we developed a new theoretical framework, generated our own data from over 100 patients and healthy people, and our accuracy of locating cancer in an organ is around 90%,” says Zhang, but he adds, “Major medical challenges don’t get solved by one team working alone”.
 
Confluence and advances in computing and biology

The research endeavors of Professors Zhou and Zhang have been made possible by the confluence of advances in computing and molecular biology. Over the past 20 years, there has been a paradigm shift in biology, a substantial increase in computing power, huge advances in artificial intelligence (AI), and a plunge in the cost of data storage. It took 13 years, US$3bn, and help from 7 governments to produce the first map of the human genome, which was completed in 2003. Soon it will be possible to sequence an entire genome in less than an hour for US$100.
 
The end of traditional in vitro diagnostics

Liquid biopsies are a sequencing-based technology that detects microscopic fragments of DNA in just a few drops of blood, and they hold out the potential to diagnose cancers before the onset of symptoms. Roger Kornberg, Professor of Structural Biology at Stanford University and 2006 Nobel Laureate in Chemistry for his work in understanding how DNA is converted into RNA, “which gives a voice to genetic information that, on its own, is silent,” describes how advances in molecular science are fueling the replacement of traditional in vitro diagnostics with virtually instantaneous point-of-care diagnostics that need no complex processes or elaborate and expensive infrastructure. Liquid biopsies, such as those developed by Zhou and Zhang, could provide clinicians with a rapid and cheap means to detect cancer early, enabling immediate treatment closely tailored to each patient’s disease state.

 
 
FDA approval of liquid biopsy
 
In 2016, the US Food and Drug Administration (FDA) granted Swiss pharmaceutical and biotech firm Roche approval for a liquid biopsy, which can detect gene mutations in the most common type of lung cancer, and thereby predict whether certain types of drugs can help treat it. 

The clinical implementation of such tests is not yet widespread, and there has been no regulatory approval of liquid biopsies for diagnosing cancer generally. Notwithstanding, ctDNA is now being extensively studied, as it is a non-invasive “real-time” biomarker that can provide diagnostic and prognostic information before and during treatment, and at progression.
 

cfDNA and ctDNA

Cell-free DNA (cfDNA) is a broad term for DNA freely circulating in the bloodstream, which does not necessarily originate from a tumor. Circulating tumor DNA (ctDNA) is fragmented DNA derived directly from a tumor or from circulating tumor cells (CTCs).
 
Commercialization of the liquid biopsy race
 
Bill Gates, Jeff Bezos and leading venture capitalists have poured hundreds of millions into the goal of developing liquid biopsies. The US market alone is projected at US$29bn, according to a 2015 report from investment bank Piper Jaffray. Currently, there are about 40 companies in the US analyzing blood for fragments of DNA shed by dying cancer cells. Notwithstanding, only a few companies have successfully marketed liquid biopsies, and these are limited to identifying the best treatments for certain cancers, and to updating treatments as the cancer mutates. So far, no one has succeeded in diagnosing incipient cancer from a vial of blood drawn from a patient who looks and feels perfectly healthy.
 
Some US companies in the liquid biopsy race

At the 2016 meeting of the American Society of Clinical Oncology (ASCO), a Silicon Valley start-up, Guardant Health, which has raised some US$200m, presented findings from a large study involving over 15,000 participants, which demonstrated the accuracy of its liquid biopsy test, Guardant360, for patients with advanced solid tumors. The study found the same patterns of genomic changes in cfDNA reported by the Guardant360 test as those found in 398 patients with matching tissue samples between 94% and 100% of the time.

The 70-gene test is the first comprehensive, non-invasive genomic cancer-sequencing test to market, and according to the company, about 2,000 physicians worldwide have used it. Guardant expects to continue to develop its technology and maintain a commercial lead in the cfDNA liquid biopsy space. The next step for Guardant is to go beyond sequencing, which matches patients to targeted oncology drugs, to the early detection of cancer itself.
 
Also in 2016, Gates and Bezos teamed up with San Diego’s Illumina, which makes most of the DNA-sequencing machines used to pick appropriate treatments for cancer patients, to launch another liquid biopsy start-up called Grail. In 2017, Grail raised US$900m to help it develop blood-based diagnostics that enable routine, early detection of cancer. The company aims to refine and validate its liquid biopsy technology by running a number of large-scale clinical studies in which it expects to sequence hundreds of thousands of patients. Another California-based biotech start-up, Freenome, raised US$65m to validate its liquid biopsy technology for the early detection of cancer.
 
Takeaways

Despite the findings of the two 2017 studies reported in the journals Genome Biology and Nature Genetics, FDA approval of Roche’s liquid biopsy, a massive increase in investment, and significant commercial biotech activity, there is a gap between reality and aspiration for liquid biopsies. To provide doctors with a reliable, point-of-care means to detect cancer early, liquid biopsies will have to overcome several significant challenges. The major one is assay sensitivity and specificity for the analysis of ctDNA and cfDNA. To compete with the gold-standard solid tissue biopsy, and to ensure that patients receive early diagnosis and appropriate treatment, a successful liquid biopsy assay will have to demonstrate a high positive predictive value. Concomitantly, good sensitivity and excellent specificity will be required to yield acceptable rates of false positives and false negatives. Notwithstanding, the race among bioengineers to develop a non-invasive “real-time” liquid biopsy to detect cancer early is gaining momentum.
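The demand for high positive predictive value is worth making concrete, because it follows from Bayes’ rule: when a screening population has low cancer prevalence, even a highly specific test produces many false alarms. The numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive results that are true cancers (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical screening assay: 90% sensitive, 99% specific,
# applied to a population in which 1% actually have cancer.
ppv = positive_predictive_value(0.90, 0.99, 0.01)
print(f"{ppv:.0%}")  # → 48%
```

On these assumed figures, roughly half of all positive results would be false alarms, which is why a liquid biopsy intended for population screening needs excellent specificity, not merely good sensitivity.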
 

 
  • Competition is intensifying among scientists to develop and use gene editing and immunotherapy to defeat intractable diseases
  • Chinese scientists were the first to inject people with cells modified by the CRISPR–Cas9 gene-editing technique
  • Several studies have extracted a patient’s own immune cells, modified them using gene-editing techniques, and re-infused them into the patient to seek and destroy cancer cells
  • A new prêt à l'emploi (ready-to-use) gene editing treatment disables the gene that causes donor immune cells to attack their host
  • The technique harvests immune cells from a donor, modifies and multiplies them so that they may be used quickly, easily and cheaply on different patients
  • Commercial, technical, regulatory and ethical barriers to gene editing differ in different geographies 

Gene editing battles

Gene editing and immunotherapy are developing apace. They have been innovative and effective in the fight against melanoma, lung cancer, lymphomas and some leukaemias, and promise much more. Somatic gene therapy changes, fixes and replaces genes at the tissue or cellular level to treat a patient, and the changes are not passed on to the patient’s offspring. Germline gene therapy inserts genes into reproductive cells and embryos to correct genetic defects that could be passed on to future generations. Although there are still many unanswered clinical, commercial and ethical questions surrounding gene therapy, its future is assured, and will be shaped by unexpected new market entrants and by competition between Chinese and Western scientists, which is gaining momentum.
  
14 February 2017

On 14 February 2017, an influential US science advisory group formed by the National Academy of Sciences and the National Academy of Medicine gave support to the modification of human embryos to prevent “serious diseases and disabilities” in cases where there are no other “reasonable alternatives”. This is one step closer to making once-unthinkable heritable changes to the human genome. The report, however, insisted that before humanity intervenes in its own evolution there should be a wide-ranging public debate, since the technology is associated with a number of unresolved ethical challenges. The French oppose gene editing, the Dutch and the Swedes support it, and a recent Nature editorial suggested that the EU is “habitually paralysed whenever genetic modification is discussed”. In the meantime, clinical studies involving gene editing are advancing apace in China, while the rest of the world appears to be embroiled in intellectual property and ethical debates, and playing catch-up.
 
15 February 2017

On 15 February 2017, after a long, high-profile, heated and costly intellectual property action, judges at the US Patent and Trademark Office ruled in favor of Professor Feng Zhang and the Broad Institute of MIT and Harvard over patents issued to them for the gene-editing technology CRISPR-Cas9: a cheap, easy-to-use, all-purpose gene-editing tool with huge therapeutic and commercial potential.
 
The proceedings were brought by the University of California, Berkeley, which claimed that the CRISPR technology had been invented by Professor Jennifer Doudna of the University and Professor Emmanuelle Charpentier, now at the Max Planck Institute for Infection Biology in Berlin, and described in a paper they published in the journal Science in 2012. Berkeley argued that after the 2012 publication an “obvious” development of the technology was to edit eukaryotic cells, which, Berkeley claimed, is all that Zhang did, and that his patents are therefore without merit.

The Broad Institute countered that Zhang made a significant inventive leap in applying CRISPR knowledge to edit complex organisms such as human cells, that there was no overlap with the University of California’s research outcomes, and that the patents were therefore deserved. The judges agreed, ruling that the 10 CRISPR-Cas9 patents awarded to Zhang and the Broad Institute are sufficiently different from the patents applied for by Berkeley to stand.
 
The scientific community

Interestingly, before the 15 February 2017 ruling the scientific community had appeared to side with Berkeley. In 2015, Doudna and Charpentier were awarded US$3m and US$0.5m respectively for the prestigious Breakthrough Prize in life sciences and the Gruber Genetics Prize. In 2017 they were awarded the Japan Prize of US$0.45m for “extending the boundaries of life sciences”. Doudna and Charpentier have each founded companies to commercially exploit their discovery: respectively, Intellia Therapeutics and CRISPR Therapeutics.
 
16 February 2017

A day after the patent ruling, Doudna said: “The Broad Institute is happy that their patent didn’t get thrown out, but we are pleased that our patent based on earlier work can now proceed to be issued”. According to Doudna, her patents are applicable to all cells, whereas Zhang’s patents are much more narrowly indicated. “They (Zhang and the Broad Institute) will have patents on green tennis balls. We will get patents on all tennis balls,” says Doudna.
 
Gene biology

Gene therapy has evolved from the science of genetics, the study of how heredity works. Life begins in a cell, the basic building block of all multicellular organisms, which are made up of trillions of cells, each performing a specific function. Pairs of chromosomes, each comprising a single molecule of DNA, reside in a cell’s nucleus. These contain the blueprint of life: genes, which determine inherited characteristics. Each chromosome holds millions of DNA base pairs, organised into segments that include the genes. Genes carry hereditary information that determines an organism’s growth and characteristics, and they produce the proteins responsible for most of the body’s chemical functions and biological reactions.

Roger Kornberg, an American structural biologist who won the 2006 Nobel Prize in Chemistry “for his studies of the molecular basis of eukaryotic transcription”, describes the impact of human genome determination on pharmaceuticals:
 
 
China’s first
 
While American scientists were fighting over intellectual property associated with CRISPR-Cas9, and American national scientific and medical academies were making lukewarm pronouncements about gene editing, Chinese scientists had edited the genomes of human embryos in an attempt to modify genes associated with β-thalassemia and HIV, and are planning further clinical studies. In October 2016, Nature reported that a team led by oncologist Lu You at Chengdu’s Sichuan University established a world first by using CRISPR-Cas9 technology to genetically modify a human patient’s immune cells and re-infuse them into the patient, who had aggressive lung cancer, in the expectation that the edited cells would seek out, attack and destroy the cancer. Lu is recruiting more lung cancer patients to treat in this way, and is planning further clinical studies that use similar ex vivo CRISPR-Cas9 approaches to treat bladder, kidney and prostate cancers.
 
The Parker Institute for Cancer Immunotherapy
 
Conscious of the Chinese scientists’ achievements, Carl June, Professor of Pathology and Laboratory Medicine at the University of Pennsylvania and director of the new Parker Institute for Cancer Immunotherapy, believes America has the scientific infrastructure and support to accelerate gene editing and immunotherapies. Gene editing was first used therapeutically in humans at the University of Pennsylvania in 2014, when scientists modified the CCR5 gene (a co-receptor for HIV entry) on T-cells, which were injected into patients with AIDS to tackle HIV replication. Twelve patients with chronic HIV infection received autologous cells carrying a modified CCR5 gene, and HIV DNA levels decreased in most patients.
 
Medical science and the music industry

The Parker Institute was founded in 2016 with a US$250m donation from Sean Parker, founder of Napster, an online music site, and former chairman of Facebook. This represents the largest single contribution ever made to the field of immunotherapy. The Institute unites 6 American medical schools and cancer centres with the aim of accelerating cures for cancer through immunotherapy approaches. 

Parker, who is 37, believes that medical research could learn from the music industry, which has been transformed by music-sharing services such as Spotify. According to Parker, more scientists sharing intellectual property might transform immunotherapy research. He also suggests that T-cells, which have had significant success as a treatment for leukaemia, are similar to computers because they can be re-programmed to become more effective at fighting certain cancers. The studies proposed by June and colleagues focus on removing T-cells from a patient’s blood, modifying them in a laboratory to express chimeric antigen receptors that will attack cancer cells, and then re-infusing them into the patient to destroy the cancer. This approach, however, is expensive, and in very young children it is not always possible to extract enough immune cells for the technique to work.

 
Prêt à l'emploi therapy

Waseem Qasim, Professor of Cell & Gene Therapy at University College London and Consultant in Paediatric Immunology at Great Ormond Street Hospital, has overcome some of the challenges raised by June and his research. In 2015, Qasim and his team successfully used a prêt à l'emploi gene editing technique on a very young leukaemia patient. The technique, developed by the Paris-based pharmaceutical company Cellectis, disables the gene that causes donor immune cells to attack their host. This was a world first: treating leukaemia with genetically engineered immune cells from another person. Today, the young patient is in remission. A second child, treated similarly by Qasim in December 2015, also shows no signs of the leukaemia returning. The cases were reported in 2017 in the journal Science Translational Medicine.
 
Universal cells to treat anyone cost effectively

The principal attraction of the prêt à l'emploi gene editing technique is that it can be used to create batches of cells to treat anyone. Blood is collected from a donor and turned into “hundreds” of doses that can be stored frozen. At a later point the modified cells can be taken out of storage and easily re-infused into different patients, becoming exemplars of a new generation of “living drugs” that seek and destroy specific cancer cells. The cost of manufacturing a batch of prêt à l'emploi cells is estimated at about US$4,000, compared with some US$50,000 for the more conventional method of altering a patient’s own cells and returning them to the same patient. Qasim’s clinical successes raise the possibility of relatively cheap cellular therapy using supplies of universal cells that could be dripped into patients’ veins at a moment’s notice.
 
Takeaways
 
CRISPR-Cas9 provides a relatively cheap and easy-to-use means to get an all-purpose gene-editing technology into clinics throughout the world. Clinical studies using the technology have shown a lot of promise especially in blood cancers. These studies are accelerating, and prêt à l'emploi gene editing techniques as an immunotherapy suggest a new and efficacious therapeutic pathway. Notwithstanding the clinical successes, there remain significant clinical, commercial and ethical challenges, but expect these to be approached differently in different parts of the world. And expect these differences to impact on the outcome of the scientific race, which is gaining momentum.
 
 
 
  • The convergence of MedTech and pharma can generate innovative combination devices that promise significant therapeutic and commercial benefits
  • Combination devices such as advanced drug delivery systems offer more precise, predictable and personalized healthcare
  • The global market for advanced drug delivery systems is US$196bn and growing
  • Biosensors play a role in convergence and innovative drug delivery systems
  • Roger Kornberg, Professor of Structural Biology at Stanford University and 2006 Nobel Prize winner for Chemistry, describes the technological advances which are shaping new medical therapies

    

The convergence of MedTech and pharma and the role of biosensors

MedTech and pharma companies are converging.
What role do biosensors play in such a convergence?
 
Traditionally, MedTech and big pharma have progressed along parallel paths. More recently, however, their paths have begun to converge in an attempt to gain a competitive edge in a radically changing healthcare landscape. Convergence leverages MedTech’s technical expertise and pharma’s medical and biological agents to develop combination devices. These are expected to significantly improve diagnosis, monitoring and treatment of 21st century chronic lifetime diseases, and thereby make a substantial contribution to an evolving healthcare ecosystem that demands enhanced patient outcomes, and effective cost-containment.
 

Conventional diagnostics & drug delivery

Conventional in vitro diagnostics for common diseases are costly, time-consuming, and require centralized laboratories, experienced personnel and bulky equipment. Standard processes include the collection and transportation of biological samples from the point of care to a centralized laboratory for processing by experienced personnel. After the results become available, which usually takes days, the laboratory notifies doctors, who in turn contact patients and modify their treatments as required. Conventional modes of treatment have mainly consisted of simple, fast-acting pharmaceuticals dispensed orally or as injectables. Such limited means of drug delivery slow the progress of drug development, since most drugs are formulated to accommodate the conventional oral or injection delivery routes. Concerns about the quantity and duration of a drug’s presence, and its potential toxic effect on proximal non-diseased tissue, drive interest in alternative drug delivery systems and fuel the convergence of MedTech and pharma.



The end of in vitro diagnostics

Roger Kornberg, Professor of Structural Biology at Stanford University, reflects on the limitations of conventional in vitro diagnostics, and describes how technological advances facilitate rapid point-of-care diagnostics, which are easier and cheaper:

 
 
Converging interest
 
Illustrative of the MedTech-pharma convergence is Verily’s (formerly Google Life Sciences) partnership with Novartis to develop smart contact lenses to correct presbyopia (age-related farsightedness) and to monitor diabetes by measuring glucose in tears. Otsuka’s partnership with Proteus Digital Health is another example. This venture expects to develop an ingestible drug-adherence device. Proteus already has an FDA-approved sensor that measures medication adherence. Otsuka is embedding Proteus’s sensor, which is the size of a grain of sand, into its medication for severe mental illnesses in order to improve drug adherence, a serious problem: 50% of prescribed medication in the US is not taken as directed, resulting in unnecessary escalation of conditions and therapies, higher costs to health systems, and a serious challenge for clinical studies.

Drivers of change

The principal drivers of MedTech-pharma convergence include scientific and technological advances, ageing populations, increased chronic lifestyle diseases, emerging-market expansion, and developments in therapies. All have played a role in changing healthcare demands and delivery landscapes. Responding to these changes, both MedTech and pharma have continued to emphasize growth, while attempting to enhance value for payers and patients. This has resulted in cost cutting, and a sharper focus on high-performing therapeutics. It has also fuelled MedTech-pharma convergence and the consequent development of combination devices. According to Deloitte’s 2016 Global Life Science Outlook, combination devices “will likely continue to rapidly increase in number and application”.

MedTech’s changing business model
 
Over the past two decades, MedTech has been challenged by tighter regulatory scrutiny, and continued pressure on healthcare budgets, but advantaged by technological progress, which it has embraced to create new business models. This has been rewarded by positive healthcare investment trends. Over a similar period, pharma has been challenged by the expiry of its patents, advances in molecular science, and changing demographics, but buoyed by increased healthcare spending trends, although the forces that increase health costs are being tempered by a demand for value.

As pharma has been increasingly challenged, so interest has increased in the potential of MedTech to address some of the more pressing healthcare demands in a radically changing healthcare ecosystem. Unlike pharma, MedTech has leveraged social, mobile, and cloud technologies to develop new business models and innovative devices for earlier diagnoses, faster and less invasive interventions, enhanced patient monitoring, and improved management of lifetime chronic conditions.
 
Such innovations are contributing to cheaper, faster, and more efficient patient care, and shifting MedTech’s strategic focus away from curative care, such as joint replacements, towards improving the quality of life for patients with chronic long-term conditions. This re-focusing has strengthened MedTech commercially, and is rapidly changing the way in which healthcare is delivered, the way health professionals treat patients, and the way patients experience healthcare.
 
Josh Shachar, founder of several successful US technology companies and author of a number of patents, describes the new healthcare ecosystem and some of the commercial opportunities it offers, which are predicated on the convergence of MedTech and pharma:
 
 
The decline of big pharma’s traditional business model
 
Pharma’s one-size-fits-all traditional business model, which has fuelled its commercial success over the past century, is based on broad population averages. That model is now in decline as patents expire on major drugs and product pipelines diminish. For example, over the past 30 years the expiry of pharma’s patents has cost the industry some US$240bn.

Advances in genetics and molecular biology, which followed the complete sequencing of the human genome in 2003, revolutionized medicine and shifted its focus from inefficient one-size-fits-all drugs to personalized therapies that matched patients to drugs via diagnostic tests and biomarkers in order to improve outcomes, and reduce side effects. Already 40% of drugs in development are personalized medicines, and this is projected to increase to nearly 70% over the next five years.

Today, analysts transform individuals’ DNA information into practical data, which drives drug discovery and diagnostics, and tailors medicines to treat individual diseases. This personalized medicine aims to target the right therapy to the right patient at the right time, in order to improve outcomes and reduce costs, and is transforming how healthcare is delivered and diseases managed. 

 
Personalized medicine

Personalized medicine has significantly dented pharma’s one-size-fits-all strategies. In general, pharma has been slow to respond to external shocks, and slow to renew its internal processes of discovery and development. As a result, the majority of new pharma drugs offer only marginal benefits. Today, pharma finds itself trapped in a downward commercial spiral: its revenues have plummeted, it has shed thousands of jobs, it has a dearth of one-size-fits-all drugs, and replacement drugs are difficult to find and, when found, expensive.

Illustrative of the advances in molecular science that helped to undermine pharma’s traditional commercial strategy is the work of Kornberg. Here he describes the research for which he received the 2006 Nobel Prize in Chemistry, how biological information encoded in the genome is accessed to direct all human activity and the construction of organisms, work that laid the foundations of personalized medicine:

 

  
Advanced drug delivery systems
 
Over the past 20 years, as pharma has struggled commercially and MedTech has shifted its business model, drug delivery systems have advanced significantly. Evolving sensor technologies have played a role in facilitating some of these advances, and are positioned to play an increasingly important role in the future of advanced drug delivery. According to BCC Research, the global market for advanced drug delivery systems, which increase bioavailability, reduce side effects, and improve patient compliance, increased from US$134bn in 2008 to some US$196bn in 2014.
 
The growth drivers for innovative drug delivery systems include recent advances in biological drugs such as proteins and nucleic acids, which have broadened the scope of therapeutic targets for a number of diseases. There are, however, challenges.

 

Proteins are important structural and functional biomolecules that are a major part of every cell in the body. There are two nucleic acids: DNA and RNA. DNA stores and transfers genetic information, while RNA delivers information from DNA to the protein-builders in the cells.


For instance, RNA is inherently unstable, and potentially immunogenic, and therefore requires innovative, targeted delivery systems. Such systems have benefitted significantly from progress in biomedical engineering and sensor technologies, which have enhanced the value of discoveries of bioactive molecules and gene therapies, and contributed to a number of new, advanced and innovative combination drug delivery systems, which promise to be more efficacious than conventional ones. 
 
Biosensors
 
The use of biosensors in drug delivery systems is not new. The insulin pump is one example. Introduced in its present form some 30 years ago, the insulin pump is a near-physiologic, programmable method of insulin delivery that is flexible and lifestyle-friendly.

Biosensors are analytical tools that convert biological responses into electrical signals. In healthcare, they provide analyses of chemical or physiological processes and transmit those physiologic data to an observer or to a monitoring device. Historically, data outputs generated from these devices were either analog in nature or aggregated in a fashion that was not conducive to secondary analysis. The latest biosensors are wearable and provide vital-sign monitoring of patients, athletes, premature infants, children, psychiatric patients, people who need long-term care, the elderly, and people in remote regions.
 
Increased accuracy and speed
 
The success of biosensors is associated with their ability to achieve very high levels of precision in measuring disease-specific biomarkers in both in vitro and in vivo environments. They use a biological element, such as enzymes, antibodies, receptors, tissues or microorganisms, capable of recognizing or signalling real-time biochemical changes in different inflammatory diseases and tumors. A transducer then converts the biochemical signal into a quantifiable signal that can be transmitted, detected and analysed, giving biosensors the potential, among other things, for rapid, accurate diagnosis and disease management.
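The recognition-plus-transducer pipeline described above can be sketched in a few lines of code. The sketch below is purely illustrative, not a real device driver: the linear calibration constants, the nanoamp readings, and the clinical threshold are all invented for the example.

```python
# Illustrative biosensor read-out pipeline (hypothetical numbers throughout):
# the transducer yields a raw current; a calibration curve converts it to an
# analyte concentration, which is then classified against a threshold.

def current_to_concentration(current_na, slope_na_per_mm=4.0, intercept_na=1.5):
    """Invert a linear calibration curve I = slope * C + intercept (assumed)."""
    return max(0.0, (current_na - intercept_na) / slope_na_per_mm)

def classify(concentration_mm, upper_limit_mm=7.8):
    """Flag readings above a hypothetical clinical threshold."""
    return "elevated" if concentration_mm > upper_limit_mm else "normal"

readings_na = [9.5, 33.1, 45.9]          # simulated raw transducer currents (nA)
for i in readings_na:
    c = current_to_concentration(i)
    print(f"{i:5.1f} nA -> {c:5.2f} mM ({classify(c)})")
```

Real biosensors add noise filtering, temperature compensation, and drift correction; the point here is only the shape of the signal chain from raw transducer output to an interpretable result.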
 
Recent technological advances have led to the development of biosensors capable of detecting target molecules in very low quantities, with enhanced capacity for increased accuracy and speed of diagnosis, prognosis and disease management. Biosensors are robust, inexpensive and easy to use; more importantly, they do not require any sample preparation, since they are able to detect almost any biomarker (protein, nucleic acid, small molecule, etc.) within a pool of other biomolecular substances. Recently, researchers have developed various innovative strategies to miniaturize biosensors so that they can be used as an active, integral part of tissue engineering systems and implanted in vivo.

 
Market for biosensors
 
Over the past decade, the market in biosensors and bioinformatics has grown, driven by advances in artificial intelligence (AI), increased computer power, enhanced network connectivity, miniaturization, and large data storage capacity.

Today, biosensors represent a rapidly expanding field, estimated to be growing at 60% per year, albeit from a low base. In addition to providing a critical analytical component for new drug delivery systems, biosensors are used for environmental and food analysis, and for production monitoring. The estimated annual world analytical market is about US$12bn, of which 30% is in healthcare. There is vast expansion potential for biosensors, because less than 0.1% of the analytical market currently uses them.

A significant impetus for this growth comes from the healthcare industry, where there is increasing demand for inexpensive and reliable sensors across many aspects of both primary and secondary healthcare. It is reasonable to assume that a major biosensor market will be where an immediate assay is required, and in the near term patients will use biosensors to monitor and manage treatable lifetime conditions, such as diabetes, cancer, and heart disease.

The integration of biosensors with drug delivery
 
The integration of biosensors with drug delivery systems supports improved disease management and better patient compliance, since all information about a person’s medical condition can be monitored and maintained continuously. It also increases the potential for implantable pharmacies, which can operate as closed-loop systems that facilitate continuous diagnosis, treatment and prognosis without vast data processing and specialist intervention. A number of diseases require continuous monitoring for effective management. For example, frequent measurement of blood-flow changes could improve the ability of healthcare providers to diagnose and treat patients with vascular conditions, such as those associated with diabetes and high blood pressure. Further, physicochemical changes in the body can indicate the progression of a disease before it manifests itself, and early detection of illness and its progression can increase the efficacy of therapeutics.
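The closed-loop idea, sensor reading in, dose adjustment out, can be sketched as a simple proportional controller. Everything in this sketch is an assumption for illustration: the target level, the gain, and the simulated readings. Real closed-loop delivery systems require validated physiological models, hard safety limits, and regulatory approval.

```python
# Minimal closed-loop dosing sketch (all constants hypothetical).

TARGET = 5.5          # hypothetical target analyte level (mM)
GAIN = 0.4            # hypothetical dose units per mM of excess

def dose_for(reading):
    """Dose proportionally to how far the sensor reading exceeds the target;
    never dose when the level is at or below target."""
    excess = reading - TARGET
    return round(max(0.0, GAIN * excess), 2)

for reading in [5.0, 6.5, 9.0]:          # simulated sensor readings (mM)
    print(f"level {reading} mM -> dose {dose_for(reading)} units")
```

In practice such loops also integrate error over time and rate-limit dose changes (PID control rather than pure proportional control), but the skeleton above captures why continuous sensing and delivery belong in one device.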
 
Takeaways

Combination devices, which are triggered by the convergence of MedTech and pharma, offer substantial therapeutic and commercial opportunities. There is significant potential for biosensors in this convergence. The importance of biosensors is associated with their operational simplicity, higher sensitivity, ability to perform multiplex analysis, and capability to be integrated into different functions using the same chip. However, non-trivial challenges remain in reconciling the demands of performance and yield with simplicity and affordability.
 
 
 
  • Chinese scientists lead the world in editing genomes of human embryos in order to develop new therapies for intractable diseases
  • US and UK regulators have given permission to edit the genes of human embryos
  • CRISPR-Cas9 has become the most common gene editing platform, which acts like a pair of molecular scissors
  • CRISPR technology has the potential to revolutionize medicine, but critics say it could create a two-tiered society with elite citizens, and an underclass and have called for a worldwide moratorium on gene editing
  • Roger Kornberg, professor of medicine at Stanford University and 2006 Nobel Prize winner for Chemistry explains the science, which underpins gene-editing technology
  
Gene editing positioned to revolutionise medicine
 
It is a world first for China.
 
In 2015, a group of Chinese scientists edited the genomes of human embryos in an attempt to modify the gene responsible for β-thalassemia, a potentially fatal blood disorder. Researchers, led by Junjiu Huang from Sun Yat-sen University in Guangzhou, published their findings in the journal Protein & Cell.
 
In April 2016, another team of Chinese scientists reported a second experiment, which used the same gene editing procedure to alter a gene associated with resistance to the HIV virus. The research, led by Yong Fan, from Guangzhou Medical University, was published in the Journal of Assisted Reproduction and Genetics. At least two other groups in China are pursuing gene-editing research in human embryos, and thousands of scientists throughout the world are increasingly using a gene-editing technique called CRISPR-Cas9.
 
 

CRISPR-Cas9

Almost all cells in any living organism contain DNA, a type of molecule, which is passed on from one generation to the next. The genome is the entire sequence of DNA of an organism. Gene editing is the deliberate alteration of a selected DNA sequence in a living cell. CRISPR-Cas9 is a cheap and powerful technology that makes it possible to precisely “cut and paste” DNA, and has become the most common tool to create genetically modified organisms. Using CRISPR-Cas9, scientists can target specific sections of DNA, delete them, and if necessary, insert new genetic sequences. In its most basic form, CRISPR-Cas9 consists of a small piece of RNA, a genetic molecule closely related to DNA, and an enzyme protein called Cas9. The CRISPR component is the programmable molecular machinery that aligns the gene-editing tool at exactly the correct position on the DNA molecule. Then Cas9, a bacterial enzyme, cuts through the strands of DNA like a pair of molecular scissors. Gene editing differs from gene therapy, which is the introduction of normal genes into cells in place of missing or defective ones in order to correct genetic disorders.
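As a loose software analogy (not a biochemical model), the "align, cut, and paste" logic just described resembles a targeted string edit: find the guide sequence, excise it, and splice in replacement material. The sequences below are invented for illustration.

```python
def edit_genome(genome, guide, replacement):
    """Toy analogy of CRISPR-Cas9: locate the guide sequence (CRISPR's
    alignment step), excise it (the Cas9 cut), and splice in new material.
    Returns the edited sequence, or the original if no target site exists."""
    site = genome.find(guide)
    if site == -1:
        return genome                      # no target site: nothing edited
    return genome[:site] + replacement + genome[site + len(guide):]

genome = "ATGGCCATTGTAATGGGCCGCTG"         # invented sequence
edited = edit_genome(genome, guide="ATTGTA", replacement="ATTGCA")
print(edited)  # ATGGCCATTGCAATGGGCCGCTG
```

The analogy also hints at the technology's weakness discussed later: a short guide sequence may match, or nearly match, more than one place in a three-billion-base genome.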
 
Ground-breaking discovery 

The ground-breaking discovery of how CRISPR-Cas9 could be used in genome editing was first described by Jennifer Doudna, Professor of Chemistry and Cell Biology at the University of California, Berkeley, and Emmanuelle Charpentier, a geneticist and microbiologist now at the Max Planck Institute for Infection Biology in Berlin, and published in the journal Science in 2012.

In 2011 Feng Zhang, a bioengineer at the Broad Institute of MIT and Harvard, learned about CRISPR and began working on adapting it for use in human cells. His findings, published in 2013, demonstrated how CRISPR-Cas9 can be used to edit the human genome in living cells.

Subsequently, there has been an on-going battle between the scientists and their respective institutions over who actually discovered CRISPR’s use in human cells, and who is entitled to the technology’s patents.
 
Gene editing research gathers pace worldwide: a few western examples

In 2016 a US federal biosafety and ethics panel licensed scientists at the University of Pennsylvania’s new Parker Institute of Cancer Immunotherapy to undertake the first human study to endow T-cells with the ability to attack specific cancers. Patients in the study will become the first people in the world to be treated with T-cells that have been genetically modified.

T-cells are designed to fight disease, but puzzlingly they are almost useless at fighting cancer. Carl June, the Parker Institute’s director and his team of researchers, will alter three genes in the T-cells of 18 cancer patients, essentially transforming the cells into super fighters. The patients will then be re-infused with the cancer-fighting T-cells to see if they will seek and destroy cancerous tumors.

Also in 2016, the UK’s Human Fertilisation and Embryology Authority (HFEA), which regulates fertility clinics and research, granted permission to a team of scientists led by Kathy Niakan at the Francis Crick Institute in London to edit the genes of human IVF embryos in order to investigate the causes of miscarriage. Out of every 100 fertilized eggs, fewer than 50 reach the early blastocyst stage, 25 implant into the womb, and only 13 develop beyond three months.
 
Fredrik Lanner, a developmental biologist at the Karolinska Institute in Stockholm, is also using gene editing in an endeavour to discover new ways to treat infertility and prevent miscarriages. Lanner is the first researcher to modify the DNA of healthy human embryos in order to learn more about how genes regulate early embryonic development. Lanner, like other scientists using gene-editing techniques on human embryos, is meticulous in not allowing them to result in a live birth. He studies the modified embryos only for the first seven days of their growth, and never lets them develop past 14 days. “The potential benefits could be enormous”, he says.
 
Gene editing cures in a single treatment

Doctors at IVF clinics can already test embryos for genetic diseases, and pick the healthiest ones to implant into women. An advantage of gene editing is that potentially it could be used to correct genetic faults in embryos instead of picking those that happen to be healthy. This is why the two Chinese research papers represent a significant turning point. The gene editing technology they used has the potential to revolutionize the whole fight against devastating diseases, and to do many other things besides. The main benefit of gene editing therapy is that it provides potential cures for intractable diseases with a single treatment, rather than multiple treatments with possible side-effects.
 

The promise of gene editing for fatal and debilitating diseases
 
Among other things, gene editing holds out promise for people with fatal or debilitating inherited diseases. There are over 4,000 known inherited single-gene conditions, affecting about 1% of births worldwide. They include: cystic fibrosis, which affects about 70,000 people worldwide, 30,000 in the US and about 10,000 in the UK; Tay-Sachs disease, which results in spasticity and death in childhood; the inherited BRCA1 and BRCA2 genes, which predispose women to a significantly greater chance of developing breast or ovarian cancer; sickle-cell anaemia, in which inheriting the sickle-cell gene from both parents causes red blood cells to spontaneously “sickle” during a stress crisis; heart disease, many types of which are passed on genetically; haemophilia, a bleeding disorder caused by the absence of a genetic clotting factor; and Huntington’s disease, a genetic condition that slowly kills its victims by eroding cognitive function and neurological status. Further, genomics plays a significant role in mortality from chronic conditions such as cancer, diabetes and heart disease.
 
A world first

Huang and his colleagues set out to see if they could replace a gene in a single-cell fertilized human embryo. In principle, all cells produced as the embryo develops would then carry the replaced gene. The embryos used by Huang were obtained from fertility clinics, but had an extra set of chromosomes, which prevented them from resulting in a live birth, though they did undergo the first stages of development. The technique used by Huang’s team involved injecting embryos with the enzyme complex CRISPR-Cas9, which, as described above, acts like a pair of molecular scissors that can be designed to find and remove a specific strand of DNA inside a cell, and then replace it with a new piece of genetic material.
 
The science underpinning gene editing

In the two videos below Roger Kornberg, professor of medicine at Stanford University and 2006 Nobel Prize winner for Chemistry for his work on “transcription”, the process by which DNA is converted into RNA, explains the science, which underpins gene-editing technology:
 
How biological information, encoded in the genome, is accessed for all human activity

 
 
Impact of human genome determination on pharmaceuticals
 
An immature technology
 
Huang’s team injected 86 embryos, and then waited 48 hours: enough time for the CRISPR-Cas9 system and the molecules that replace the missing DNA to act, and for the embryos to grow to about eight cells each. Of the 71 embryos that survived, 54 were genetically tested. Only 28 were successfully spliced, and only a fraction of those contained the replacement genetic material.
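The attrition reported in that experiment can be tallied in a few lines; the calculation below uses only the figures quoted above (the paper does not give a precise count for embryos carrying the replacement material, so the tally stops at the splicing step).

```python
# Attrition figures reported for Huang's 2015 experiment.
injected = 86
survived = 71
tested = 54
spliced = 28          # embryos where CRISPR-Cas9 cut at the intended site

survival_rate = survived / injected
splice_rate_of_tested = spliced / tested
overall = spliced / injected

print(f"survived: {survival_rate:.0%}")                           # ~83%
print(f"spliced (of those tested): {splice_rate_of_tested:.0%}")  # ~52%
print(f"spliced (of all injected): {overall:.0%}")                # ~33%
```

Roughly a third of injected embryos showed the intended cut, and fewer still carried the replacement gene, which is why the authors themselves described the technology as immature.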
 
Therapy to cure HIV
 
Fan, the Chinese scientist who used CRISPR in an endeavor to discover a therapy for HIV/Aids, collected 213 fertilized human eggs, donated by 87 patients, which, like the embryos used by Huang, were unsuitable for implantation as part of in vitro fertility therapy. Fan used CRISPR–Cas9 to introduce into some of the embryos a mutation that cripples an immune-cell gene called CCR5. Some humans who naturally carry this mutation are resistant to HIV, because the mutation alters the CCR5 protein in a way that prevents the virus from entering the T-cells it tries to infect. Fan’s analysis showed that only 4 of the 26 human embryos targeted were successfully modified.
 
Deleting and altering genes not targeted
 
In 2012, soon after scientists reported that CRISPR could edit DNA, experts raised concerns about “off-target effects”, where CRISPR inadvertently deletes or alters genes not targeted by the scientists. This can happen because one molecule in CRISPR acts like a bloodhound, sniffing around the genome until it finds a match to its own specific sequence. Unfortunately, the human genome offers billions of potential binding sites, which raises the possibility that the procedure might result in more than one match.
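Why a guide sequence can hit more than one site is easy to demonstrate with a naive scan that tolerates a mismatch or two. The sequences here are invented, and real off-target prediction tools are far more sophisticated (they model the genome-wide search, strand orientation, and mismatch position), but the sketch shows how quickly near-matches multiply.

```python
def near_matches(genome, guide, max_mismatches=1):
    """Naive scan for potential off-target sites: every position where the
    guide aligns with at most `max_mismatches` base differences."""
    hits = []
    for i in range(len(genome) - len(guide) + 1):
        window = genome[i:i + len(guide)]
        mismatches = sum(a != b for a, b in zip(window, guide))
        if mismatches <= max_mismatches:
            hits.append(i)
    return hits

genome = "ACGTACGTTACGAACGTACGA"   # invented sequence
print(near_matches(genome, "ACGTA", max_mismatches=1))  # [0, 4, 9, 13]
```

Even in this 21-base toy sequence, a 5-base guide allowing one mismatch finds four candidate sites; scaled to three billion bases, the case for longer guides and careful off-target screening is clear.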
 
Huang is considering ways to decrease the number of “off-target” mutations: tweaking the enzymes to guide them more precisely to a desired spot; introducing the enzymes in a different format in order to regulate their lifespans, allowing them to be shut down before mutations accumulate; and varying the concentrations of the introduced enzymes and repair molecules. He is also considering other gene-editing techniques, such as TALENs.

 
The slippery slope to eugenics

Despite the potential therapeutic benefits from gene editing, critics suggest that genetic changes to embryos, known as germline modifications, are the start of a “slippery slope” that could eventually lead to the creation of a two-tiered society, with elite citizens, genetically engineered to be smarter, healthier and to live longer, and an underclass of biologically run-of-the-mill humans.
 
Some people believe that the work of Huang, Fan and others crosses a significant ethical line: because germline modifications are heritable, they could have unpredictable effects on future generations. Few people would argue against using CRISPR to treat terminal cancer patients, but what about treating chronic diseases or disabilities? If cystic fibrosis can be corrected with CRISPR, should obesity, which is associated with many life-threatening conditions, be corrected too? Who decides where the line is drawn?
 
Some 40 countries have banned CRISPR in human embryos. Two prominent journals, Nature and Science, rejected Huang’s 2015 research paper on ethical grounds, and subsequently Nature published a note calling for a global moratorium on the genetic modification of human embryos, suggesting that there are “grave concerns” about the ethics and safety of the technology.
 
A 2016 report from the Nuffield Council on Bioethics suggests that, because of the steep rise in genetic technology and the general availability of cheap, simple-to-use gene-editing kits, which make it relatively straightforward for enthusiasts outside laboratories to perform experiments, there need to be internationally agreed ethical codes before the technology develops further.
 
Recently, the novelist Kazuo Ishiguro, among others, joined the debate, arguing that social changes unleashed by gene editing technologies could undermine core human values. “We’re coming close to the point where we can, objectively in some sense, create people who are superior to others,” says Ishiguro.
 
Takeaways

CRISPR has been described as the “Model T of genetics”.  Just as the Model T was the first motor vehicle to be successfully mass-produced, and made driving cheap and accessible to the masses, so CRISPR has made a complex process to alter any piece of DNA in any species easy, cheap and reliable, and accessible to scientists throughout the world. Although CRISPR still faces some technical challenges, and notwithstanding that it has ignited significant protests on ethical grounds, there is now a global race to push the boundaries of its capabilities well beyond its present limits.
 
 
  • Influenza, or flu, outbreaks are recurrent and every year pose a significant risk to global health
  • Influenza affects millions: each year 3m to 5m cases of severe disease and 500,000 deaths
  • Pandemics occur about three times a century
  • The 1918 flu pandemic killed 21m people; total deaths in World War I were 17m
  • Effective treatment of patients with respiratory illness depends on accurate and timely diagnosis
  • Early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy
  • Rapid Influenza Diagnostic Tests (RIDTs) are useful in determining whether outbreaks of respiratory disease might be due to influenza
  • RIDTs vary in their sensitivity, specificity, complexity, and time to produce results
  • There is a pressing need for faster, cheaper, and easier-to-use flu tests with higher levels of sensitivity and specificity than those currently available
  • The large, fast-growing, global and under-served RIDT market drives a host of initiatives
  • Various development challenges pose significant threats
 
 
The critical importance of new rapid influenza diagnostic tests
 
What challenges face developers of cheap, easy-to-use, rapid and accurate diagnostic tests for influenza, or flu, which improve on tests currently available?

 
Influenza

Influenza is a highly contagious respiratory illness caused by a virus, and occurs in distinct outbreaks of varying extent every year. Its epidemiologic pattern reflects the changing nature of the antigenic properties of influenza viruses. The viruses’ subsequent spread depends upon multiple factors, including transmissibility and the susceptibility of the population. Influenza A viruses, in particular, have a remarkable ability to undergo periodic changes in the antigenic characteristics of their envelope glycoproteins: the hemagglutinin and the neuraminidase.

Anyone can get influenza. It is usually spread by the coughs and sneezes of an infected person, but you can also catch flu by touching an infected person (e.g. shaking hands). Adults are contagious one to two days before getting symptoms and up to seven days after becoming ill, which means that you can spread the influenza virus before you even know you are infected. Influenza presents as a sudden onset of high fever, myalgia, headache and severe malaise, cough (usually dry), sore throat, and runny nose.

There are several treatment options, which aim to ease symptoms until the infection clears and to prevent complications. Most healthy people recover within one to two weeks without requiring any medical treatment. However, influenza can cause severe illness or death, especially in people at high risk such as the very young, the elderly, and people suffering from medical conditions such as lung diseases, diabetes, cancer, and kidney or heart problems.
  
Costly killer

Influenza is a cruel, costly killer with a history of pandemics. It causes millions of upper respiratory tract infections every year as it spreads around the world in seasonal epidemics, and poses on-going risks to health. The most vulnerable are the young, the old and those with chronic medical conditions such as heart disease, respiratory problems and diabetes. Each year, on average, 5% to 20% of populations in wealthy countries get influenza. In the US it causes more than 200,000 hospitalizations and 36,000 deaths annually, and each year costs the American economy between US$71bn and US$167bn.
 
History of pandemics

The 1918-19 “Spanish Flu” pandemic caused 21m deaths, and was one of three 20th-century influenza pandemics. At least four pandemics occurred in the 19th century, and the first pandemic of the 21st century was the 2009 “Swine Flu”. It proved less deadly than originally feared, but still resulted in 18,449 laboratory-confirmed deaths; if people who died from complications precipitated by the Swine Flu are included, the actual death toll is significantly higher. Mindful of the potential accelerated spread of a pandemic subtype of the influenza virus, the World Health Organization (WHO) and national governments continuously monitor influenza viruses. Assessment of pathogenicity and virulence is the key to taking appropriate healthcare actions in the event of an outbreak.

However, without widespread access to improved diagnostic tests, each year millions will not receive timely anti-viral medication, tens of thousands of influenza sufferers will develop complications, and thousands will die unnecessarily, as the growing interconnections and complexity of the world present an increasing challenge to influenza prevention and control.
 

The influenza viruses

Influenza is a single-stranded, helically shaped, RNA virus of the orthomyxovirus family. Influenza viruses are divided into two groups: A and B. Influenza A has two subtypes which are important for humans: A(H3N2) and A(H1N1). The former is currently associated with most deaths. Influenza viruses are defined by two different protein components, known as antigens, on the surface of the virus. They are haemagglutinin (H) and neuraminidase (N) components. Influenza viruses circulate in all parts of the world, and mutate at a low level, referred to as "genetic drift", which allows influenza to continuously evolve and escape from the pressures of population immunity. This means that each individual is always susceptible to infections with new strains of the virus. "Genetic shift" occurs when a strain of influenza A virus completely replaces one or more of its gene segments with the homologous segments from another influenza A strain, a process known as reassortment. If the new segments are from an animal influenza virus to which humans have had no exposure and no immunity, pandemics may ensue.
 
Gold standard diagnosis rarely used

The gold standard method for the detection of influenza viruses, viral culture, is rarely performed, as patients with suspected influenza are most likely to be seen by a primary care doctor with limited resources, and the gold standard test requires sophisticated laboratory infrastructure and takes at least 48 hours. Even the faster Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test, a molecular assay that amplifies and detects viral RNA, has a turnaround time of four to six hours. It is also expensive, and therefore not commonly used.

The slowness and expense of traditional influenza tests led to the development of an array of commercially available Rapid Influenza Diagnostic Tests (RIDTs), which screen for influenza viruses, and provide results within as little as 15 minutes after sample collection and processing. Such tests are largely immunoassays that can identify the presence of influenza A and B viral nucleoprotein antigens in respiratory specimens and display the results in a qualitative way (positive or negative). About 10 such tests have FDA approval and are available in the US. About 20 have been determined suitable for the European market. All are growing in their usage. However, the RIDTs vary in their sensitivity, specificity, complexity, and in the time needed to produce results.
  
Tests rule in influenza but do not rule it out
 
According to the Centers for Disease Control and Prevention (CDC), the commercially available RIDTs in America have a sensitivity ranging from 50% to 70%, meaning that in 30% to 50% of influenza cases the test result will be falsely negative. A study showed that tests for the H1N1 virus, a subtype of influenza A that was the most common cause of the 2009 swine flu pandemic and is associated with the 1918 Spanish Flu pandemic, have a sensitivity ranging from 32% to 50% depending on the brand of test. A 2012 meta-analysis of the accuracy of RIDTs reported an average sensitivity for detecting influenza in adults of only 54%. Sensitivity in children is somewhat higher, since they tend to shed a greater quantity of virus. Thus some 30% to 50% of flu samples that would register positive by the gold standard viral culture test may give a false negative when using a RIDT, and some may indicate a false positive when a person is not infected with influenza. The RIDTs that are currently available therefore allow influenza to be ruled in but not ruled out. More sensitive tests are needed.
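The asymmetry between ruling influenza in and ruling it out can be made concrete with Bayes' theorem. The sketch below uses the 54% adult sensitivity from the meta-analysis cited above; the 98% specificity and 20% prevalence are assumed figures for illustration only, not values from any particular study:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test with the given characteristics."""
    tp = sensitivity * prevalence                  # true positives
    fp = (1 - specificity) * (1 - prevalence)      # false positives
    fn = (1 - sensitivity) * prevalence            # false negatives (missed cases)
    tn = specificity * (1 - prevalence)            # true negatives
    ppv = tp / (tp + fp)   # P(influenza | positive result)
    npv = tn / (tn + fn)   # P(no influenza | negative result)
    return ppv, npv

# 54% sensitivity from the 2012 meta-analysis; specificity and the
# prevalence of influenza among tested patients are assumptions.
ppv, npv = predictive_values(sensitivity=0.54, specificity=0.98, prevalence=0.20)
print(f"PPV: {ppv:.0%}, NPV: {npv:.0%}")
```

Under these assumptions a positive result is highly informative (PPV around 87%), but roughly one in ten negative results still belongs to an infected patient: exactly the "rule in but not rule out" behaviour described above.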
 
New flu tests

There are a number of innovative nano-scale molecular diagnostic influenza tests in development, which are expected to deliver more accurate results than existing antigen-based tests. The new tests use a platform comprising an extremely thin layer of material that detects the presence of influenza proteins in saliva or blood. The platform is attached to an electronic chip, which turns it into a sensor: the chip is an essential part of the measuring device, as it converts the input signal into a quantity suitable for measurement and interpretation. The presence of influenza proteins in saliva or blood triggers an electrical signal in the chip, which is then communicated to a mobile phone.
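The signal chain described above, in which the chip converts a raw analogue signal into a qualitative result for transmission to a phone, can be sketched in outline. Everything here (the function name, the baseline and threshold values) is a hypothetical illustration of the general principle, not a description of any actual device:

```python
def classify_reading(signal_mv, baseline_mv=120.0, threshold_mv=35.0):
    """Map a raw sensor voltage (millivolts) to a qualitative result.

    A binding event between influenza proteins and the sensing layer is
    assumed to raise the output above the baseline; the chip's role is to
    turn that analogue change into the positive/negative call that is
    relayed to the phone. All numbers are illustrative.
    """
    delta = signal_mv - baseline_mv
    return "positive" if delta >= threshold_mv else "negative"

# Illustrative readings: one near baseline, one with a strong binding signal.
print(classify_reading(122.0))   # small change over baseline -> "negative"
print(classify_reading(168.0))   # large change -> protein detected, "positive"
```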
 
Here Roger Kornberg, Professor of Structural Biology at Stanford University School of Medicine and 2006 Nobel Laureate in Chemistry, describes how advances in molecular science are enabling the replacement of traditional in vitro diagnostics with rapid, virtually instantaneous point-of-care diagnostics that require neither complex processes nor elaborate infrastructure. Antiviral drugs for influenza are available in some countries and may reduce severe complications and deaths, but ideally they need to be administered early in the disease (within 48 hours of the onset of symptoms). An almost instantaneous point-of-care test would enable better access to appropriate treatment, particularly in primary care:

 
 
Challenges

Notwithstanding recent scientific advances, new and innovative influenza detection tests will need to overcome significant challenges to outperform current RIDTs. In addition to the usual challenges associated with sensitivity and specificity, developers have to be aware of recent changes in immunochromatographic antigen detection testing for influenza viruses, and of the rapid development of commercially available nucleic acid amplification tests. There are also the usual development challenges associated with miniaturization, fabrication, scaling, marketing, and regulation. Effective from 13 February 2017, the FDA reclassified antigen-based rapid influenza detection tests from class I to class II devices. Class II devices are considered higher risk than class I, and require greater regulatory controls to provide reasonable assurance of their safety and effectiveness. The reclassification was prompted by the potential for such devices to fail to detect newer variants of the influenza virus: a novel variant of influenza A, H7N9, has emerged in Asia, and H5N1 is also re-emergent.
 
Another challenge, especially for start-ups with limited resources, is the fluctuating nature of the influenza virus itself. A bad year for patients, when influenza causes millions of people to become ill, is a good year for manufacturers of RIDTs. Conversely, a good year for patients, when influenza affects a lower percentage of the population, is a bad year for manufacturers who suffer from unsold inventory, and reduced revenues. Thus, the vagaries of the flu virus not only have the potential to kill millions of people, they also pose a significant threat to start-ups dedicated to developing RIDTs.
 
Takeaways

Despite these challenges, there is a significant commercial opportunity for improved tests in the current under-served global RIDT market. Each year more than 1bn primary care visits are made in the US, and in the UK the NHS deals with over 1m patients every 36 hours. The global in vitro diagnostics market was valued at US$60bn in 2016 and is expected to grow at a CAGR of 5.5% to reach US$79bn by 2021. Over the same period, the global point-of-care diagnostics sub-market is expected to grow at a CAGR of 10% to reach US$37bn by 2021. Large corporates, small start-ups, and university research laboratories have spotted the opportunity and started developing new and innovative RIDTs. Given that influenza causes widespread morbidity as well as mortality every year, supporting all efforts to develop swift and reliable RIDTs should be a matter of priority. A significant step forward would be an RIDT with sufficient sensitivity and usability that the test could be administered, and a result given, within a 10-minute primary care consultation.
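The market projections above can be sanity-checked with the standard compound-growth formula; the figures used are only those quoted in this Commentary:

```python
def project(value, cagr, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Global in vitro diagnostics market: US$60bn in 2016 at a 5.5% CAGR.
print(round(project(60, 0.055, 5)))  # approx. 78 (US$bn), consistent with the quoted US$79bn

# Working backwards, a US$37bn point-of-care market in 2021 at a 10% CAGR
# implies a 2016 starting value of roughly US$23bn.
print(round(37 / 1.10 ** 5))
```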
 