Five Dreadful Ideas Healthtech Innovators Must Avoid At All Costs!

Ayinla Daniel (Founder & Editor)

In the world of innovation, there are attractive ideas that innovators, investors, leaders, and entrepreneurs must avoid.

And it’s not because these ideas are bad.

In fact, most of them are excellent and exciting ideas that have a lot of problem-solving potential.

The problem usually boils down to factors like execution, strategy, and, most importantly, what I call seasons.

While many exciting, innovative ideas exist in the health tech ecosystem, successful implementation requires patience, strategic planning, and a keen understanding of timing.

People have not yet developed a culture that can accommodate these innovations, and our current global technological infrastructures are not mature enough to support them.

Most of these innovations are only suitable for research and tend to be handled by large tech companies that can invest millions in research and development to prepare for the future.

Creating consumer products or services based on these innovations is often impossible or difficult for smaller innovators or investors.

The high cost of research and development, coupled with the lack of a mature market, makes it impractical.

Let’s take artificial intelligence as an example.

Twenty years ago, few would have dared to openly discuss innovating or investing in AI because its season had not yet come.

Though leading global companies sponsored many research projects, hardly anyone was openly building consumer-ready AI products.

Come 2022, and all of a sudden, ChatGPT goes mainstream and the AI boom begins.

Why? The season had come.

Global technological infrastructure began to mature to accommodate the computing power needed to host AI technology and systems.

Though we still have a long way to go before AI becomes as ubiquitous as the software we are used to and stops costing a limb to operate, we can boldly claim that we are well on our way to a world where AI is part of our global digital ecosystem.

Companies will now design and build new hardware and products with AI in mind.

Today, there are many health tech ideas that innovators must avoid at all costs.

I came up with this list partly from my experience writing about and researching health tech and digital health (and from being active in building a few products and services), and partly from studying the perspectives of global leaders and experts in health tech.

Most innovators may have at one point considered how these ideas can transform healthcare and maybe even ventured in, only to be confronted by universal challenges.

Here are the ideas health tech innovators must avoid; most experts call them “Tarpit Ideas.”

These “Tarpit Ideas” are indeed attractive and have potential, but their complexity, difficulty, or futuristic nature makes them challenging to build or implement at this time.

Now, let’s dive in.

Personalised Health Dashboards (Patient-Owned Health Data – POHD)

I know most of us have thought and dreamed about this as health-tech innovators: dashboards where users can track all their vital physiological metrics at a go, connected to all sorts of wearables and gizmos.

We have even advanced this idea by exploring ways to connect data with AI analytics, providing valuable insights for individuals to manage their health.

Yes, I have also thought about this idea a hundred times. And this idea is wonderful.

In fact, it can potentially transform healthcare and bring personalised care into perspective.

However, it’s unrealistic (now) and very difficult to design and build.

Why?

Most people who are highly committed to tracking their health with personalised dashboards fall into one of a few categories: they might be managing chronic illnesses, they could be healthy individuals with a strong interest in their healthcare, or they might be wealthy people who invest a lot of time and resources in their health. (Additionally, most personalised dashboard systems and tools are reserved for research purposes and are not entirely commercial.)

However, these groups represent a small fraction of the overall population.

This makes developing consumer-ready products or services based on such solutions challenging.

Furthermore, when you consider the engineering required to integrate wearables, lab results, electronic health records (EHRs), insurance, and other essential background systems, the task becomes even more daunting.

There isn’t a market for this until we get to the point where the healthcare culture pivots to preventive healthcare, where people are more interested in monitoring their health to catch potential problems before they happen.

But for now, we are far from creating universally successful products or services based on personalised health dashboards.

To show you how difficult this is, Google had to pull the plug on its original Google Health project, Microsoft shut down its HealthVault project in 2019, and a host of other global tech companies that thought they could make a big harvest in this area followed suit.

It’s difficult to push innovations that are not yet part of people’s culture.

Once we solve global interoperability (the foundation of anything personalised in health), we can begin to journey gradually towards the era of personalised health dashboards.

For now, our focus should be on how we can successfully integrate AI into basic healthcare structures.

Wearables Are Still Fancy!

Wearable gadgets like smartwatches, patches, rings, etc., are just very good at collecting data.

For the data collected to be useful, it must be able to reach points where it can be processed into real action and provide actual value.

Even the biggest providers of wearable technologies are still trying to figure out how to turn that data into value.

The reason it’s difficult is that it is universally a cultural problem (for both users and providers) and an infrastructural one: insurance issues, interoperability, data ownership and privacy, you name it.

Artificial Intelligence Diagnostics (General-Purpose AI Diagnostics)

As a healthcare professional, I understand at a deeper level that AI will never be able to replace healthcare professionals—at least not the type of AI we have now.

Healthcare is very complex and personal.

It’s not like writing code to help assemble vehicles in a factory.

We are dealing with human beings, and we’re incredibly different in how we react, respond and embrace care.

We may be similar in biology and anatomy (to some extent), but our emotional fibres and structures are entirely different.

How will AI be able to quickly adjust and adapt to new responses and needs from different people at the same time?

AI will excel in narrow use cases like radiology and pathology (it’s already breaking through ceilings in these specialities).

Yes. No human doctor will ever be able to outsmart even the dullest AI model at detecting patterns in millions of images or data points. It’s not possible. So, in that area, AI models will excel.

However, when it comes to connecting intricate dots not found on paper or in images but formed from unseen and intangible emotional connections, AI will never be able to keep up with any trained human healthcare professional.

It can support, enhance, and even help speed up diagnostics, but completely replacing the diagnostic abilities of doctors and healthcare professionals is impossible for now.

Until we have AI models that can understand the intricate emotional language of human beings, this will not change; I don’t think a bunch of code can ever understand emotions unless we discover how to squeeze emotions into code.

So, if you’re considering innovating in general-purpose AI diagnostics, you must understand your limitations and scope.

We’ve read of the lofty ideas of startups that wanted to “replace doctors and nurses” but couldn’t pull it off after a few years, throwing millions down the drain.

People want to connect with people, not intelligent language models.

They can decide to receive help from AI, but there will be corridors where they will need to hold human hands, look into human eyes, and speak into human ears.

The IBM Watson Health project is a typical example of how difficult it is to attempt to replace human healthcare experts with AI.

Apart from issues like high cost and privacy (the biggest challenges in health-tech), there is also the issue of trust.

We haven’t reached the point where people can wholly trust machines to do one of the most essential jobs in healthcare: diagnose their illness.

Blockchain For Health Records

Now, to one of the most talked-about health tech solutions of the past decade: using blockchain to enhance and transform healthcare records.

I have been a big advocate of secure and fluid healthcare records for a long time.

We even tried to start a campaign to educate people about blockchain in healthcare (it didn’t work; people, even the professionals themselves, don’t understand what it means).

For blockchain technology to truly become effective in healthcare data management, our systems must attain something like 80% interoperability, with healthcare data flowing seamlessly across all healthcare institutions and points of health data collection, from smartphones to wearables.

However, as it stands, hospitals still completely control healthcare data, and to them that data is their power, perhaps the biggest power they have: not the equipment or the expertise, but the patient data. They want to be the ones controlling it.

If you introduce full interoperability, others can also, at some point in the patient data journey, tap into the potential of that data and lay claim to it, diluting the hospitals’ power and share.

It can also give the patient ownership of their data, which is obviously rightfully theirs, and blockchain makes it super easy for patients to own and control their data (in theory).

But we are still far from global interoperability, and blockchain technology remains complex, unwieldy, and porous.

Many don’t even trust the system (think about how easy it is sometimes to hack crypto ecosystems and disappear without a trace; imagine someone stealing health data from a nation and running away with it).

Blockchain will only add to the existing technical maze, complicating the whole issue of interoperability.

At best, hospitals can create their own closed blockchain ecosystems for healthcare medical records in a simple, easy-to-use system that operates only within their digital environment. Only institutions with the expertise and resources can even attempt this.

So, if you’ve been excited about blockchain’s potential to solve most of the problems with health data, ensure you do thorough research before venturing.

It’s an excellent idea, futuristic, but I feel the season has not come yet.

Maybe the AI revolution can also speed up the evolution of how we manage healthcare data, paving the way for another amazing revolution in healthcare data management, like EHR did some decades ago.

Medicalchain and BurstIQ are examples of global companies that tried to use blockchain technology to transform healthcare data, only to discover how difficult it is; the time for this kind of technology may simply not have come yet.

Genetic Risk Score (GRS) Consumer Technologies

Our genetic codes hold amazing physiological data that can help predict the future of our health.

Fantastic idea: brilliant and futuristic. But a successful idea isn’t just meant to be brilliant; it must have economic value, and most people don’t care about that information.

We haven’t entered that culture of preventive healthcare yet, where people are more interested in preventive measures than curative ones, and most people who got this information weren’t sure what to do with it, beyond perhaps using it as a kind of physiological trophy: hey, I have the DNA of Superman! Or, I have a rare genetic condition. Okay?

And there isn’t much intervention available for most of the insights harvested.

The market may be limited to those who are just super curious, the ultra-wealthy who want to live long (or forever), and perhaps some research purposes.

There are people who have detected that they are at risk of Alzheimer’s using genetic risk scoring and have started to adjust their lifestyles, but how many people are interested to the point that it becomes a viable product or service? Not many.

I am not interested in knowing my risk of any type of disease, and I don’t know how many people close to me are either. Couple that with the fact that this type of idea is extremely expensive to develop.

It can become something in the future when preventive care finally gains more traction as more people understand and are ready to accept it, and as systems grow and mature to put the required infrastructure in place to accommodate preventive healthcare.

The present healthcare model we have will take some time to evolve to a state where it becomes preventive healthcare-friendly.

The consumer genetic sub-ecosystem is a tough place to survive in.

23andMe, a popular DNA testing company, couldn’t keep up. As usual, it’s not that they were building a bad idea; they were building in the wrong season.

AI For Mental Health Support/Care

The same principles from AI diagnostics apply here. But it’s even more complex, because we are talking about something very intangible: mental health support.

What can a chatbot or even an advanced model offer when I am depressed or anxious?

I wonder.

Some pre-installed answers, or algorithmic responses scraped from millions of other answers? That’s all it can offer, no matter how advanced it gets. There’s a limitation, a gross one.

I am a big advocate of AI. However, we must understand that AI has its boundaries, especially regarding deep, genuine human interactions.

Mental health is one of the most delicate aspects of healthcare.

It requires patience, understanding, deep expertise and genuine connection.

An algorithm will never be able to understand what patience is.

Even our most advanced models will never understand emotion.

These algorithms may be super smart and fast, but they don’t understand what a human being feels. Feeling is a gift and a tool.

In healthcare, good healthcare professionals are those who can connect with patients on a genuine emotional level.

Startups that tried to develop AI-powered mental health support systems found out that, at some point, a real human will always need to intervene.

AI may have its place in mental health support, but its place is limited. As I mentioned, if you are planning on venturing, you must understand the scope and keep in mind that people will still prefer to communicate with humans rather than just intelligent algorithms.

We may be able to train smart algorithms to offer some help or assistance, but we must know that they can’t replace genuine human connections.

Besides, we are still far from reliable mental health AI that can even partially fill the gap left by a well-trained human expert. No research has yet provided concrete evidence that it can.

That’s it.

These ideas are not bad, as you have seen for yourself.

They are just complicated to execute and difficult to monetise, and most of them have not entered their seasons yet.

Health-tech isn’t just about solving problems. It’s mainly about solving the right problems at the right time.

You can solve the problems of 100 people, but that’s not enough. To be commercially successful, you must solve the problem for many more people, at scale; if you can’t, then there’s a problem.


