
Saturday, June 30, 2018

Revolutionizing healthcare - reposted from Peter Diamandis

We are on the brink of a revolution in healthcare.
AI is making the drug discovery process >100X faster and cheaper, and 90% more likely to succeed in clinical trials.
Mobile health is predicted to become a $102 billion market by 2022, putting a virtual doctor, on-demand, in your back pocket.
And the cost of sequencing your genome (3.2 billion base pairs) has decreased 100,000-fold over 13 years, a staggering speed that is 3 times faster than Moore’s Law. 
[Chart: Cost Per Genome. Source: Social Capital via Medium]
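
As a quick sanity check on that claim (my own back-of-the-envelope arithmetic, not from the original post): a 100,000-fold cost drop over 13 years implies the cost halved roughly every nine and a half months, versus the roughly two-year cadence usually attributed to Moore's Law.

import math

fold_reduction = 100_000   # sequencing cost fell 100,000-fold...
years = 13                 # ...over 13 years

# Halvings needed for a 100,000-fold drop: log2(100,000) ~= 16.6
halvings = math.log2(fold_reduction)
halving_time = years / halvings        # ~0.78 years per halving
moore_halving_time = 2.0               # assumed Moore's Law cadence (~2 years)

print(f"Implied cost-halving time: {halving_time:.2f} years")
print(f"Speed-up vs Moore's Law:   {moore_halving_time / halving_time:.1f}x")

That works out to about 2.6x under a two-year Moore's Law assumption; the multiple shifts with the endpoints and cadence you assume, which is presumably how the 3x figure above was reached.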
But humans have barely scratched the surface.
As exponential technologies explode onto the scene all at once, we are about to witness the unprecedented rise of personalized, ubiquitous and intelligent healthcare.
In this blog, we’ll discuss how converging exponential technologies are enabling:
  1. Personalized medicine
  2. Delocalized (“everywhere”) care
  3. The new era of intelligent prevention
We are truly living in an era when anything is possible.

Personalized Medicine 

Currently, research focuses on one-size-fits-all solutions. Clinical trials aim to discover therapies for the general population — and can only introduce them after years of expensive initial research, lab testing, human testing clearance, multiple phases of patient testing and maybe ultimate approval.
But what if all treatments were targeted at your individual genome, from lab testing to ready product, and at a lower cost?
Using its powerful deep learning systems, NVIDIA aims to tailor treatments to an individual's genomic makeup.
Others, like a team at the University of Toronto, are building genetic interpretation engines to pinpoint cancer-causing genetic mutations in individual patients.
Similarly, researchers at UNC’s Lineberger Comprehensive Cancer Center use cognitive computing to identify individually relevant therapeutic options based on one’s genetic profile.
But this is only the beginning. Take Harvard Wyss Institute’s organs-on-chips.
Containing microfluidic channels with living human cells and mechanical mimicry of an organ’s microenvironment, the Wyss Institute’s organs-on-chips can serve as micro artificial hearts, lungs, intestines and kidneys, among many other organs.
The biotech company Emulate has raised millions for use of these organs-on-chips to replace traditional animal testing and deliver personalized medicine. 
[Image: Organs-on-Chips. Emulate uses organs-on-chips to accurately test drugs on individual human organs. Source: Emulate]
In the future, these could be your cells on a chip, tested with treatment after treatment until the right one sticks, tailored exactly to your genetic makeup.
But it doesn’t stop at genetically personalized treatments. Welcome to personalized diets. 
Each of us has about 40 trillion microorganisms occupying our gut, and each microbiome - like our DNA - is distinct. Through a simple home kit, Viome applies machine learning to analyze your microbiome and generate optimal, personalized nutritional recommendations for your gut.
Bowhead Health takes yet another approach to personalized medicine. With either a saliva or a blood-prick test, Bowhead's small home device reads your biometric data in real time and transmits the readings to doctors. As soon as key deficiencies are identified, your in-home Bowhead device dispenses a customized, vitamin-based pill, all your own.

Delocalized Care

Kaiser Permanente’s chairman and chief executive George Halvorson foresees plummeting healthcare costs as care migrates farther from hospitals and doctors’ offices and into any and every setting via the Internet.
The harbingers of “everywhere care” are so abundant, they deserve a blog of their own.
Here are the highlights:
(1) mHealth (or Mobile Health) has already grown beyond a $23 billion market, and by some estimates will surpass $102 billion by 2022.
Step aside, WebMD.
AI-powered medical chatbots are flooding the market. Diagnostic apps can identify anything from a rash to diabetic retinopathy. And with the advent of global connectivity, mHealth platforms enable real-time health data collection, transmission and remote diagnosis by medical professionals.
Already available to residents across North London, Babylon Health offers immediate medical advice through AI-powered chatbots and video consultations with doctors via its app. Babylon now aims to build up its AI for advanced diagnostics and even prescription. 
Others, like Woebot, take on mental health, using Cognitive Behavioral Therapy in communications over Facebook Messenger with patients suffering from depression.
New diagnostics and screening apps are also beginning to empower the next generation of patient-doctors.
In addition to phone apps and add-ons that test for fertility or autism, the now-FDA-approved Clarius L7 Linear Array Ultrasound Scanner can connect directly to iOS and Android devices and perform wireless ultrasounds at a moment’s notice. 
With mHealth platforms like ClickMedix, which connects remotely located patients to medical providers through real-time health data collection and transmission, what’s to stop us from delivering needed treatments through drone delivery or robotic telesurgery?
(2) AR/VR will revolutionize medical training, making it immersive and ubiquitously accessible. 
It’s no wonder the healthcare industry suffers from a shortage of doctors. Medical training is not only expensive, but its conventional methods also severely limit scalability.
With virtual and augmented reality, however, gone are the days of peering over a surgeon’s shoulder to learn from another’s experience.
Why not perform surgery on an annotated, virtual 3D body from anywhere in the world, for minimal cost, and do no harm?
Companies like Echopixel and 3D4Medical are achieving this delocalization and hands-on training with remarkable style, translating 2D scans and anatomy into live AR and VR patients.
[Image: Lung - 3D4Medical. 3D4Medical translates 2D anatomical and brain scan diagrams into 3D AR realities. Source: 3D4Medical]
(3) AI-aided IoMT (Internet of Medical Things) may be one of the most exciting frontiers in healthcare.
Welcome to the age of intravenous nanomachines, electronic implants and pill-embedded sensors.
While wearables have long been able to track and transmit our steps, heart rate and various other health factors, smart nanobots and ingestible sensors will soon be able to monitor countless health parameters and even help diagnose disease.
But it doesn’t stop there. As nanosensor and nanonetworking capabilities develop, these tiny bots may soon communicate with each other, enabling the targeted delivery of drugs and autonomous corrective action.
Some companies, however, are working on high-precision sensors that need not enter the body. Apple, for instance, is reportedly building sensors that can noninvasively monitor blood sugar levels in real-time for diabetic treatment.
In last year’s Qualcomm Tricorder XPRIZE, we were proud to grant $2.5 million in prize money to the winning team, Final Frontier Medical Devices. Using a group of noninvasive sensors that collect data on vital signs, body chemistry and biological functions, Final Frontier integrates this data in their powerful, AI-based DxtER diagnostic engine for rapid, high-precision assessments. Their engine combines learnings from clinical emergency medicine and data analysis from actual patients.
IoT-connected sensors are also entering the world of prescription drugs. Just this winter, the first sensor-embedded pill — Abilify MyCite — was approved by the FDA.
Digital pills such as Abilify will now be able to communicate medication data to a user-controlled app, to which doctors may be granted access for remote monitoring.
But nanobots and IoT-connected sensors get a lot more exciting when they converge with 3D printers, AI supercomputers and the power of big data.

Intelligent Prevention

Take a minute to imagine this unprecedented convergence:
Nanobot sensors flowing through your bloodstream monitor different health parameters, measuring nutrient levels and keeping an eye on your cholesterol.
As data flows in, these connected sensors transmit your health data in real-time to a remote AI-powered supercomputer geared with all your genomics, microbiome and medical history data — access secured via blockchain, of course.
As abnormalities are detected, this AI-driven doctor sifts through tomes of data to identify an optimal, personalized treatment based on your genetic profile and real-time health data. Once vetted and approved, a prescription arrives at the dashboard of your in-home medical 3D printer.
With customized dosage, your 3D printer separates the drug’s active ingredients with micro-barriers and embeds a printed sensor to monitor variations in drug release and effectiveness.
Feedback is instantaneously communicated through IoMT, and AI again improves its personalized medicine for future treatment.
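
None of this exists end-to-end today, but the control loop being described is easy to sketch. Here is a toy Python version of the sense-analyze-prescribe cycle; every class name, reference range and dose rule below is my own illustrative invention, not any vendor's actual system.

from dataclasses import dataclass, field

@dataclass
class Reading:
    """One real-time measurement streamed from an in-body sensor."""
    parameter: str   # e.g. "ldl_cholesterol"
    value: float
    unit: str

@dataclass
class AIDoctor:
    """Toy stand-in for the remote AI: flags abnormal readings against
    per-patient reference ranges and proposes a personalized dose."""
    reference_ranges: dict  # parameter -> (low, high), tuned to the patient
    history: list = field(default_factory=list)

    def review(self, reading: Reading):
        self.history.append(reading)
        low, high = self.reference_ranges[reading.parameter]
        if low <= reading.value <= high:
            return None  # nothing to do
        # Naive "personalization": dose scales with how far out of range we are
        deviation = min(abs(reading.value - high), abs(reading.value - low))
        return {"drug": f"corrective agent for {reading.parameter}",
                "dose_mg": round(5 * deviation, 1)}

def print_prescription(rx):
    """Stand-in for the in-home medical 3D printer's job queue."""
    print(f"Printing {rx['dose_mg']} mg of {rx['drug']} with embedded release sensor")

doc = AIDoctor(reference_ranges={"ldl_cholesterol": (0.0, 3.4)})  # mmol/L, illustrative
for value in (2.9, 3.1, 4.2):  # streamed sensor readings
    rx = doc.review(Reading("ldl_cholesterol", value, "mmol/L"))
    if rx:
        print_prescription(rx)  # feedback from the printed sensor would close the loop

The point of the sketch is the architecture: continuous sensing, review against patient-specific reference ranges, and a prescription handed off to a local actuator, with sensor feedback closing the loop.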
You might think that AI medical powerhouses and autonomous sensors leave human doctors out of luck. But many digital healthcare startups are in fact redefining and elevating the role of our doctors.
Take Forward, for example. A digitized doctor’s office geared with AI-driven diagnostics and personalized medicine, Forward is finding a way to liberate its doctors from many of the tedious necessities that so often constrain their ability to engage with patients. 
As medical AI enterprises like Microsoft’s Healthcare NExT and IBM Watson Health bring incredible power to diagnostics, drug discovery and genetic therapy development, doctors may be freed to take on consultative roles — educating patients, performing many more remote surgeries with the help of robotics, and aiding in preventive care.

Final Thoughts

Nowhere is convergence bringing greater breakthroughs than in healthcare.
As transformative technologies like CRISPR-Cas9 unlock our genetic potential, quantum computing massively ups the speed of AI-powered drug discovery, 3D printing places the power of preventive medicine in the hands of consumers, and next-generation implants enhance our minds, we are truly living in an era when anything is possible.

Join Me 

(1) A360 Executive Mastermind: This is the sort of conversation I explore at my Executive Mastermind group called Abundance 360. The program is highly selective, limited to 360 abundance- and exponentially-minded CEOs (running $10M to $10B companies). If you'd like to be considered, apply here.
Share this with your friends, especially if they are interested in any of the areas outlined above.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital.
Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Sunday, November 13, 2016

Musing on the Interaxon Muse Meditation Headband

"For this calibration, find a comfortable position and take a deep breath".

The brain-computer interface world is getting interesting. The first time I heard about these MUSE-type brainwave-sensing devices was an experiment where people were trained to move a cursor on a computer screen using their brain waves and an EEG headband. Maybe it was the MUSE - not sure. The next thing they did was have those same people change the colour of the floodlights on Niagara Falls and the CN Tower using their entrained brainwaves.

I have now seen several research projects involving the Interaxon Muse headband - a device that guides users into a calm state of meditation by reading their brainwaves through an EEG headband and translating the data into a meditation-tracking app. It may be just the start before EEG caps, gels and wire attachments are a thing of the past.

The McMaster University library recently started loaning out this device, so instead of buying one (about $400) I have borrowed one for a week. Mind you, I have 35 years of meditation experience in a variety of schools and techniques, and I am not expecting a device like this to teach me anything. But after taking an 8-week online mindfulness course - just videos and online instructions - I believe that meditation can be taught through technology.

After downloading the app and fumbling around trying to fit the headband on my head - I should have looked at the visuals in the instructions - I learned how to sync my brainwaves using the app on the iPad. I tried a 3-minute meditation in the living room while the TV was on, a laptop was playing a video in the background, and I was talking to my wife, who was doing her yoga exercises. My brainwaves during those 3 minutes were in the noisy/active category: I scored no calm points and heard zero "birds". Hearing birds means that your brainwaves are staying in a calm, meditative space. Seeing a graph of my brainwaves is actually very interesting, but scoring points for meditating well and being asked if I want to share that on Facebook or Twitter is another thing. Tempting, though, to show all my friends on social media what a noisy mess my brainwaves are - No!

I was sort of impressed with the app interface and the instructions from the MUSE meditation guide. The next time I tried it, I sat in my meditation room on my meditation cushion and zabuton and extended the time to 7 minutes. I chose the default beach imagery with the sound of lapping waves and wind. If you hear the wind, it is actually the sound of your own brainwaves making noise. You are not watching your breath. I sat in the half-lotus posture with my hands in my lap, a classic meditation posture I have practiced for years. The resulting graph of my brainwaves after 7 minutes showed no active or noisy points - a 98% calm state of mind and about 100 birds. I could actually hear the birds in the background when I turned up the volume. Here is a picture of my stats. In my last 20-minute session the batteries in the MUSE drained and I had to resume twice, so the stats are all thrown off.
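
Interaxon's actual scoring algorithm is proprietary, but a session summary like the one above (percent calm, bird count) is easy to imagine being computed from a stream of per-second state labels. Here is a toy guess at the logic - the five-seconds-of-calm-per-bird rule is purely my assumption.

def score_session(states, seconds_per_bird=5):
    """states: per-second labels, each 'calm', 'neutral', or 'active'.
    Returns (% calm, bird count). One bird per `seconds_per_bird`
    consecutive calm seconds - a guess at how the app rewards sustained calm."""
    calm = sum(s == "calm" for s in states)
    pct_calm = 100 * calm / len(states)
    birds, run = 0, 0
    for s in states:
        run = run + 1 if s == "calm" else 0
        if run and run % seconds_per_bird == 0:
            birds += 1
    return pct_calm, birds

# A mostly-calm 7-minute (420 s) session like the one described above:
session = ["calm"] * 412 + ["neutral"] * 8
print(score_session(session))  # -> (98.09..., 82): same ballpark as my stats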

It is getting interesting, but I spent the rest of the day thinking that I had been under surveillance, with my brainwaves subjected to mechanical replication and analysis. This experience was not at all a natural process, in spite of the kind and soft voice of the human guide behind the algorithms on the app. My gurus had years and years of training and practice in meditation before they were allowed to teach. Still, I didn't let that get to me, because I am fascinated with the technology.

For the next sitting session I tried 20 minutes - about my usual meditation time these days. The result was 100% in the calm space, over 200 birds, and no "recoveries" - no straying outside the calm zone through distracted thought or a lapse of attention to mindfulness of breathing. And that was just a "normal" session for me.

I am really impressed with this device, but I am sure I don't need it, having learned the art and science of meditation the traditional way - sitting at the feet of the masters, going on retreats, and practicing daily. My real question and concern is how this device will work for digital natives and those new to meditation.

We live in a world of secular ethics, and this device does not come attached to any religious ideology. We all know by now that a mindfulness-of-breathing practice cuts across the sectarian world. Creating calm brainwaves just requires the right guidance and intervention. Is total reliance on the MUSE soulless and alienating? Not necessarily, though for a true beginner - especially one who is remote from teachers and centres and can't afford the cost - I would probably recommend an online mindfulness meditation course called Palouse Mindfulness rather than the MUSE. One of the practices in one of the major schools of Tibetan Buddhism is Lam Rim, which literally means "gradual path". The gradual path to meditative calm is the best way.


Here is one tip on meditation from my Zen teacher that will help anyone understand the nature of mind and meditation. Sitting across from me at a table, the teacher gave me a piece of paper and a pencil. He asked me to draw a small line to count each time I had a thought. It became obvious to me that the page would quickly fill up with counts of scattered thoughts. After sitting in meditation practice, the number of counts becomes noticeably fewer. Where did all those thoughts go? It is just a state of being.

Tuesday, February 25, 2014

The "sousveillance" world of Steve Mann

When I studied the use of RFID in healthcare, I was amazed at the possibilities for this technology and its essential humanness. An RFID tag is much safer for an infirm patient because the identification or drug dosage on the RFID signal can be picked up without having to move the patient. A barcode, on the other hand, might be on a wrist under a sleeping patient, who would have to be turned over in order to scan the barcode in line of sight. RFID technology was also great for keeping track of physical assets like infusion pumps, and for inventory replenishment systems. On the other hand, keeping track of people presented some ethical and privacy concerns, because people would be under the impression that they were constantly under surveillance. When the word "surveillance" is used, Big Brother rears its ugly head.
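
To make the workflow difference concrete, here is a toy sketch of a bedside medication check; the point is that the RFID branch never requires line of sight or moving the patient. The order table and ID format are my own inventions for illustration.

# Toy bedside check: match an RFID-read wristband ID against active medication orders.
orders = {  # illustrative order table: patient_id -> (drug, dose)
    "PT-00417": ("heparin", "5000 IU"),
    "PT-00988": ("insulin glargine", "10 units"),
}

def verify_dose(wristband_id: str, drug: str, dose: str) -> bool:
    """RFID lets us read wristband_id without line of sight or moving the
    patient; with a barcode we would first have to expose and scan the wrist."""
    expected = orders.get(wristband_id)
    return expected == (drug, dose)

print(verify_dose("PT-00417", "heparin", "5000 IU"))   # True: safe to administer
print(verify_dose("PT-00417", "heparin", "10000 IU"))  # False: dose mismatch, stop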

Surveillance needn't be a fearful word even though it has a strong presence in security organizations and anti-terrorism. There are forms of surveillance in public health that can be beneficial for the health and welfare of society, such as syndromic surveillance, even though that too may have had some origins in state security, i.e. finding out where that anthrax threat was.

One thing I like about the wearable-computer work of Steve Mann is his bold claim that the eye-tap video glasses he created and wears present to society a form of what he calls "sousveillance", a much more nuanced, benign and human form of its evil cousin mentioned above. Sousveillance is an understated way of trying to balance the power of who is watching whom. For some totally unknown reason it reminds me of the anti-sus dub poetry of Linton Kwesi Johnson. The anti-sus laws, or suspected-person vagrancy laws, of 19th-century Britain might have nothing to do with sousveillance, but I am sure Steve Mann has had that feeling of being considered a suspicious and unwelcome person. Racial profiling for cyborgs? His McVeillance experience is indicative of that.

Now try to imagine a year in the future when everyone is wearing eye-tap video devices of the type Steve Mann, and then Google, developed. Maybe this is in 2020 (a year appropriate for seeing perfectly), and maybe it is not, but won't this mean that everyone we see on the street, and their dog, will be the equivalent of a Google Street View with a 24/7 refresh rate? Then ask yourself what this does for privacy laws, and you will have to wonder why the privacy commissioner of Canada wrote a letter to the lawyers at Google in 2007 to say that Google Street View would break all of Canada's privacy laws if it was implemented! It is interesting to try to imagine this future, and one science fiction book I read, Halting State by Charles Stross, did exactly that. It was a murder mystery inside a video game, but the real-life police all had video-recording visors they were obligated and/or controlled to wear on the job, recording all the visual details of their day-to-day investigations. Surveillance technology may not have been extended to all citizenry, but the details are slipping away on me - I read it a few years ago.

Notions of privacy will be changing, beyond a doubt. Even now, different cultures have different notions of privacy and proxemics; I think it is Iceland that lists your tax return information in the phone book, or something like that. Imagine if we all started using Augmented Reality eye-tap devices, like the ones on the veillance.org website, which are tied into redundantly backed-up servers. Imagine people walking through hospitals with such wearable devices, scanning people sitting in the STD clinic waiting rooms. Personal space is being violated in terms of personal health information (PHI). The technology is wonderful, though. As Personal Health Records are being developed (even with HL7 standards), a problem area is how to capture and store personal information submitted by the patient, not the physician, and how to make that information intelligible: streams of data from daily blood tests, BP, and now possibly wearable-computer video images need to be managed and made relevant somehow (see the sketch below). On the other hand, IT and policy specialists in healthcare have mostly normalized the Bring Your Own Device (BYOD) phenomenon.
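
Here is a minimal sketch of a common shape for those patient-submitted streams, loosely inspired by HL7/FHIR-style observations (the field names are my simplification, not the actual standard).

from dataclasses import dataclass
from datetime import datetime

@dataclass
class PatientObservation:
    """One patient-submitted data point in a personal health record."""
    code: str        # what was measured, e.g. "blood_glucose", "bp_systolic"
    value: float
    unit: str
    taken_at: datetime
    source: str      # "self-reported", "home meter", "wearable video", ...

def daily_summary(observations, code):
    """Make a raw stream intelligible: collapse one day's readings to min/mean/max."""
    vals = [o.value for o in observations if o.code == code]
    return min(vals), sum(vals) / len(vals), max(vals)

obs = [
    PatientObservation("blood_glucose", 5.4, "mmol/L", datetime(2014, 2, 25, 7), "home meter"),
    PatientObservation("blood_glucose", 7.9, "mmol/L", datetime(2014, 2, 25, 13), "home meter"),
    PatientObservation("blood_glucose", 6.1, "mmol/L", datetime(2014, 2, 25, 21), "home meter"),
]
print(daily_summary(obs, "blood_glucose"))  # (5.4, 6.46..., 7.9)

Even a trivial roll-up like this is one answer to making the information intelligible: a clinician sees a daily range rather than a raw firehose.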

Another notion of privacy that might need to change is the idea that PHI is always private. Some people are already posting their PHI on Facebook, and they don't care if it is public; in rare cases we have even heard that this has saved lives. I have personally heard of research participants with rare and chronic health conditions who post their personal health records as widely on the internet as possible in order to obtain possible help or insight for future research. It is technologically possible, I suppose, to put PHI and other forms of identification into Augmented Reality "fields of vision" for other persons with wearable devices to readily pick up. The only things stopping people from doing that are their notions of privacy and their willingness to consent to having that out in the public domain.

I like Steve's distinction (on Wikipedia - or in his brilliant IEEE article) between surveillance and sousveillance:

Personal sousveillance is the art, science, and technology of personal experience capture, processing, storage, retrieval, and transmission, such as lifelong audiovisual recording by way of cybernetic prosthetics, such as seeing-aids, visual memory aids, and the like. Even today's personal sousveillance technologies like camera phones and weblogs tend to build a sense of community, in contrast to surveillance that some have said is corrosive to community.
The legal, ethical, and policy issues surrounding personal sousveillance are largely yet to be explored, but there are close parallels to the social and legal norms surrounding recording of telephone conversations. When one or more parties to the conversation record it, we call that sousveillance, whereas when the conversation is recorded by a person who is not a party to the conversation (such as a prison guard violating a client-lawyer relationship), we call the recording "surveillance".
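
That telephone-recording test is crisp enough to state as a one-line rule: if the recorder is a party to the interaction, it is sousveillance; otherwise it is surveillance. A toy encoding (my own, just to pin the definition down):

def classify_recording(recorder: str, parties: set) -> str:
    """Mann's telephone test: recording by a participant is sousveillance,
    recording by a non-participant is surveillance."""
    return "sousveillance" if recorder in parties else "surveillance"

call = {"client", "lawyer"}
print(classify_recording("client", call))        # sousveillance
print(classify_recording("prison guard", call))  # surveillance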

It is within this realm of "personal sousveillance" that the work of Steve Mann, as applied to health informatics, is really going to shine. Steve was one of the original group who helped secure funding for the Centre for Global eHealth Innovation at the University of Toronto, a world-leading health informatics incubator. Steve has also done some research using sousveillance on hand hygiene to reduce hospital infections. There are other, bolder applications, of course, like using Google Glass in surgery or dentistry for training and/or assisted learning.

In my own small way I am also trying to think through the "legal, ethical and policy issues", as Steve says, here on this blog. Those at the Institute for Ethics of Emerging Technology are doing that "in spades", and there is a recent article about Steve Mann and sousveillance on it (here). Steve has recently argued for "legal" rights to sousveillance in an editorial for MIT Technology Review. Veillance, in all its various forms, has become a study in itself, as Steve leads a Veillance conference and research group, which it would appear I made a blog post about last year (here).

I could also blend in here a discussion of the ethics of self-experimentation (hat tip again to the folks on the CAREB LinkedIn group for that article). Mostly we have known about clinical self-experimentation, and in the social sciences/humanities there are "autoethnographies", but now, with the development of new technologies, people are trying their own DIY experiments. I saw a TVO Agenda program (Mysteries of the Mind - Tomorrow's Brain) that discussed the benefits of Transcranial Magnetic Stimulation (TMS) for improving cognitive function and mental health, in which the panel experts played a YouTube video they had discovered and discussed the guy in it who hooked his brain up to his own home-made TMS device. In the video we see the guy, when he turns on the electricity, explaining: "Just saw a white flash". So don't do this at home, kids!

Steve Mann is not a guinea pig. He isn't a research subject. He is the subject of his own research. Developing and wearing computers is something he has done since he was a kid, so he is just riding evolutionary momentum toward whatever agile developments improve his cybernetic state of well-being. An oversight committee at his place of employment might recommend a technology ethics review, but we have to think that Steve is largely "self-employed" with this system, "dug in like a tick", and there ain't no separating him from this life experiment in digitally enhanced awareness. Anyway, Steve would fight back against anything "oversight". The danger in any research involving humans is that researchers, to a certain extent, "have blinders on" - biased towards their own methodologies and perceptions of risk - and thus lose objectivity.

I don't know who said "the pull of the future is greater than the push from the past", but I do remember the person who I heard it from. Whoever it was must have imagined some strange and distant world waiting to be born. That is the sousveillance world of Steve Mann.






Sunday, January 19, 2014

A different kind of Google Glass - a contact lens that detects glucose for diabetics


Google X is a "moonshot" group of experimental projects Google is exploring. A recent news story hitting the media concerns one of these projects, the Google Contact Lens. The premise behind it is one of the holy grails of diabetes research: finding a "pin-prick-less" way to test glucose levels. I did a study of the various devices undergoing development, and the history is a bitter one of trial and error, fraud and failure. To my knowledge, there is no FDA-approved device yet that can do this. I will set up an email alert for more news about this in the future. CBC technology coverage is great. Here is some info from our working paper on a mobile solution for self-management of diabetes:


A non-invasive technique capable of measuring blood glucose concentration with accuracy equal to or better than the current chemical glucose meters may improve compliance for glucose monitoring. [53] Considerable efforts have been made by several scientific research groups and organizations in the past few decades to develop non-invasive blood glucose monitors. Diverse optical approaches have been proposed to achieve this objective, including polarimetry, Raman spectroscopy, near-infrared (NIR) absorption and scattering, and photoacoustics. These techniques appear promising but have limitations associated with low sensitivity, low accuracy and insufficient specificity of glucose measurements at physiologically relevant levels. [53] Non-invasive continuous glucose monitors like the GlucoWatch G2 Biographer and the Continuous Glucose Monitoring (CGM) system, which are FDA approved, have been found unreliable for detecting hypoglycemia. [54] There is a non-invasive solution available in Canada for measuring blood glucose level, BioSign Technologies' UFIT Care. [25] However, this product is yet to be approved by Health Canada and therefore cannot be used.
And various references to the above:

23. Medgadget. MedGadget Web site. http://www.medgadget.com. Published 2009. Updated 2009. Accessed November 2009.
24. Pain-free precision: Clinical trial reveals new option for blood sugar testing. 2002;1(2).
25. BioSign Technologies Inc. Online Health Monitoring, Getting the Numbers Right Fact Sheet. http://www.biosign.com/Web_Files/factsheet_biosign.pdf. Updated 2009. Accessed November 2009.
53. Kirill V, Mohsen S, Montamedi M, Esenaliev R. Noninvasive Blood Glucose Monitoring With Optical Coherence Tomography. Diabetes Care. 2002;25(12). http://care.diabetesjournals.org/content/25/12/2263.abstract.
54. Accuracy of the GlucoWatch G2 Biographer and the Continuous Glucose Monitoring System During Hypoglycemia. Diabetes Care. 2004;27(3). http://care.diabetesjournals.org/content/27/3/722.abstract.
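
As a closing footnote on how the accuracy claims in that excerpt get quantified: one standard metric is the mean absolute relative difference (MARD) between a candidate monitor and reference meter readings, where lower is better. A minimal sketch with invented sample numbers:

def mard(device_readings, reference_readings):
    """Mean Absolute Relative Difference, in %: a standard accuracy
    metric for comparing a glucose monitor against reference values."""
    diffs = [abs(d - r) / r for d, r in zip(device_readings, reference_readings)]
    return 100 * sum(diffs) / len(diffs)

reference = [4.2, 5.6, 7.8, 10.1, 3.1]  # mmol/L, lab reference (invented)
candidate = [4.6, 5.1, 8.4, 9.2, 3.8]   # mmol/L, non-invasive prototype (invented)
print(f"MARD: {mard(candidate, reference):.1f}%")

Note how the largest relative error in this made-up sample lands on the lowest reading - exactly the hypoglycemic range where, per reference 54, these devices have struggled.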