
Friday, April 6, 2018

Blockchain for Healthcare and Clinical Trials

by Manohar Rana



This article is based on our proof of concept, which won 3rd place at the Generation Blockchain Challenge.

          In general, healthcare and clinical trials are complex business environments, mainly because of their direct impact on human lives and the various regulations built around them. There are many stakeholders in the ecosystem, and the need to improve how these stakeholders collaborate and communicate with each other keeps growing. Technological advancements have brought significant improvements from time to time, but because healthcare adopts new technology slowly, there is great potential for newer technologies like blockchain to improve the overall systems significantly.
          Healthcare organizations have made significant improvements through technological and process innovations that have benefitted the entire customer experience. The most important customer in the ecosystem is the patient, and the entire healthcare business is centered around this customer. The ultimate aim of the various players, such as physicians, clinics/hospitals, pharmacies, and drug manufacturers (pharmaceutical companies), is to bring value to the patient and enhance the overall customer experience. Then there are regulatory bodies, like the Food and Drug Administration (FDA), that oversee all these players and ensure that the rights of the patient are protected and not misused in any way. The patient is the end consumer of the benefits in the entire value chain.
          In clinical trials, on the other hand, drug manufacturers actually partner with human subjects, i.e. patients, to try their trial drugs on them before bringing a new drug to the market. Some of the key players in the clinical trials process are the pharmaceutical company (the drug manufacturer), the Contract Research Organisation (CRO), and the site investigators (physicians). The Institutional Review Board (IRB) acts as a regulatory body under the FDA. Since the other actors in the ecosystem are organizations with their own technological infrastructure, the subjects remain at the receiving end: they have a limited role to play in the process and are constrained by the technological capabilities of others' systems. Regulatory requirements make organizations' business systems slow, complex, and inflexible. In general, both healthcare and clinical trial partners have a growing need to collaborate and share information through these complex systems.
          Attempts are made from time to time to build centralized systems that facilitate greater collaboration and quicker information sharing, but such systems pose their own challenges around ownership of data. Integrating data from different systems owned by different parties is hard. One alternative is to connect the trusted parties that already know each other on a common platform, and blockchain technology has the potential to play that role. It may be too early to predict exactly what role blockchain will play, since not enough use cases have been tried yet, and it is difficult to say whether blockchain will displace existing systems completely or complement them for some time first. The objective here is not to speculate on whether blockchain is a replacement for traditional Clinical Trial Management Systems, but to explore small use cases that can actually bring value to the entire ecosystem.
     
Before we discuss how blockchain can play an important role in clinical trials, it is important to understand the current challenges in healthcare and clinical trials.

A few of the challenges in clinical trials are:

1. Subject recruitment: Asking and convincing a healthy subject to try a new trial drug is a challenge. A healthy person may take that risk for monetary or personal reasons. Sponsors find it very difficult to identify and recruit ideal subjects. A lot of the time, the self-reported information provided by subjects cannot be authenticated, leading to issues like dual enrollment, false disclosures, higher screen failures, a potential risk of severe adverse events (SAEs), and lawsuits, all of which increase cost and degrade the quality of clinical trial data.

2. Conducting trials: Sponsors make changes to the study protocol, modifying the inclusion and exclusion criteria after the study has started. Sometimes the changes are genuine, but sometimes they are made to widen the inclusion criteria or narrow down the exclusion criteria so that more subjects can be recruited easily.

3. Lack of trust and transparency.

4. Challenges in collaboration and communications.

Blockchain can establish and increase trust in clinical research because tampering with or manipulating research data on a blockchain is very difficult and easily traced. Self-reported data from subjects generally lacks trust, which ultimately impacts the quality and cost of the drug trial. There is also a lack of trust in the way clinical research data is gathered, analyzed, and reported. Trust is further eroded by unethical and unprofessional practices such as altering, or not reporting, the inclusion and exclusion criteria in a protocol to suit the interests of drug manufacturers. Timestamped block transactions can be easily traced and verified, making the data less prone to manipulation and tampering. It is worth reading the article about blockchain-timestamped protocols here.
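To make the tamper-evidence idea concrete, here is a minimal sketch, in Python, of timestamped, hash-chained records for protocol versions. It is only an illustration of the general technique; the record fields and function names are my own assumptions, not the API of any particular blockchain platform.

import hashlib
import json
import time

def record_hash(record: dict) -> str:
    # Hash a record deterministically (sorted keys) with SHA-256.
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> dict:
    # Append a timestamped block whose hash also covers the previous block's hash.
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = record_hash(block)
    chain.append(block)
    return block

def is_valid(chain: list) -> bool:
    # Verify every block's own hash and its link to the previous block.
    for i, block in enumerate(chain):
        expected = record_hash({k: v for k, v in block.items() if k != "hash"})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"protocol": "STUDY-001", "version": 1,
                     "inclusion": "age 18-65", "exclusion": "pregnancy"})
append_block(chain, {"protocol": "STUDY-001", "version": 2,
                     "inclusion": "age 18-75", "exclusion": "pregnancy"})
print(is_valid(chain))                       # True
chain[0]["data"]["inclusion"] = "age 18-80"  # silently widen the criteria
print(is_valid(chain))                       # False: the tampering is detected

Because each block's hash covers the previous block's hash, changing an earlier protocol version breaks the whole chain of hashes, which is exactly what makes quiet alterations easy to detect.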
Blockchain will increase transparency, collaboration, and communication in clinical trials. There are many partners in the clinical research ecosystem, such as pharmaceutical companies (sponsors), CROs, study investigators (physicians), hospitals, laboratories, insurance providers, and patients, and there is a great need for all of them to collaborate and communicate effectively because human health is at stake. The challenge is that every partner has its own technology systems, which limits their ability to communicate effectively and efficiently. A lot of time and money is wasted in requesting, transferring, and communicating information between different systems.
Blockchain brings all the trusted parties in the ecosystem to a common platform enabling them to see the clinical health records flowing through the system in real time and make timely decisions.
Not only that: once the identity of a subject is established on the blockchain network, blockchain also addresses the issue of a subject's dual enrollment in multiple studies at the same time, saving the subject from misuse and exploitation. Today it is very difficult to find out whether a subject has enrolled in other studies. Ed Miseta has highlighted the issue of dual enrollment in great detail in his article here.
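As a small illustration of the idea, here is a sketch in Python of a shared enrollment registry keyed by a pseudonymous subject identifier. The salted-hash scheme, field names, and class are my own illustrative assumptions, not a description of any existing system.

import hashlib

class EnrollmentRegistry:
    # Toy shared registry keyed by a pseudonymous subject identifier.

    def __init__(self):
        self._active = {}  # subject key -> study id

    @staticmethod
    def subject_key(national_id: str, salt: str = "network-wide-salt") -> str:
        # Store only a salted hash so the registry never holds raw identity data.
        return hashlib.sha256((salt + national_id).encode("utf-8")).hexdigest()

    def enroll(self, national_id: str, study_id: str) -> bool:
        key = self.subject_key(national_id)
        if key in self._active:
            return False  # already active in another study: dual enrollment blocked
        self._active[key] = study_id
        return True

    def complete(self, national_id: str) -> None:
        # Remove the subject once their participation in a study ends.
        self._active.pop(self.subject_key(national_id), None)

registry = EnrollmentRegistry()
print(registry.enroll("ID-12345", "STUDY-001"))  # True: first enrollment
print(registry.enroll("ID-12345", "STUDY-002"))  # False: dual enrollment flagged

On a blockchain the registry would be replicated across the trusted parties rather than held by any one organization, but the check itself stays this simple.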
From the sponsor's perspective, it saves a lot of the effort currently wasted on recruiting subjects who end up as screen failures.
Another important aspect of blockchain is that it enables the patient to play an important role as a participant. Currently, a subject is always at the receiving end of the value chain and has very limited or no access to his or her information. For example, once a patient's adverse event is reported to the physician, the patient has no idea how the case is followed up by the physician with other stakeholders. A blockchain-based system lets the patient become an important participant in the whole ecosystem.

The inherent architecture and advantages of blockchain will make various processes and systems redundant, making the overall process of clinical research simpler and more cost-effective. The direct impact will be a lower overall cost of bringing a new drug to market, savings that can ultimately be passed on to patients. More importantly, the subject becomes a key participant in the clinical trial process and is protected from misuse and exploitation.

Blockchain technology has the potential to bring disruptive changes to healthcare and clinical trials that would make many current processes and businesses obsolete. It is in the best interest of the entire industry to explore the opportunities blockchain provides in order to remain sustainable in the longer run.

Thursday, December 1, 2016

Tableau - Implementation Challenges and Best Practices

Hi All,

I thought of sharing my learnings and experiences with Tableau so far.

This post describes some of the challenges you could face while implementing Tableau BI in your enterprise from an architectural standpoint.

If you are evaluating BI tools or planning to start an implementation, you will definitely benefit from this post. I will highlight some of the best practices that you can add to your list.

Tableau is flexible when it comes to playing with and analyzing your data. It gives you complete freedom to choose and connect to your data source and quickly start building those nice vizzes (reports, charts, and dashboards).
You can do pretty much everything: join data sources in SQL, put filters on to restrict your data. If you are a data analyst, you can build some really compelling data visualizations in a very short span of time.
Now you share those nice visualizations with your team or department, and they too get very excited.

Up to this point it is all cool stuff. The challenges start from here.

1. Don't I need data warehouse star schemas?
A data warehouse star schema contains facts and dimensions, which give you enormous benefits by simplifying your implementation. You won't believe how much it can help in terms of performance, scalability, and maintenance.
Some may argue that Tableau doesn't need any kind of warehouse or these fact-and-dimension star schemas.
Well, if you are a really small enterprise, you may not need it. But if you have a good amount of data and various source systems and applications, do not build your BI without a data warehouse. Sometimes your organisation already has a warehouse but, as a data analyst, you may be tempted NOT to use it.
Since Tableau does not have a centralized metadata layer, users are free to create their SQL the way they want. This freedom proves costly in the long term.
Developers build their SQL on top of OLTP or normalized data structures, and the result is highly complex SQL with large numbers of joins and poor performance.
Very soon you will have hundreds of those complex queries with lots of duplicated data, where one query differs from another only slightly. It is not easy to debug those complex queries to make additions or alterations, so you can imagine how difficult they would be to maintain.
A star schema reduces those joins and makes your SQL very simple, and of course the performance is far better (see the small sketch at the end of this point).
Tableau can pull the data in extract mode, which improves performance to some extent, but do not ignore the other benefits. If for some reason you later need to run your application in live mode, you may need to completely redesign it. Such reasons could include more frequent data refreshes or row-level security, for which your Tableau application needs a live connection.
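As an illustration of why the star schema keeps queries simple, here is a small pandas sketch; the table and column names are made up for the example. A fact table of sales joined to two narrow dimensions answers a business question with two key joins, whereas the same question asked directly against normalized OLTP tables would typically need many more.

import pandas as pd

# Dimension tables: small and descriptive, one row per member.
dim_date = pd.DataFrame({
    "date_key": [20160101, 20160102],
    "month": ["2016-01", "2016-01"],
})
dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "category": ["Widgets", "Gadgets"],
})

# Fact table: one row per sale, holding only keys and measures.
fact_sales = pd.DataFrame({
    "date_key": [20160101, 20160101, 20160102],
    "product_key": [1, 2, 1],
    "amount": [100.0, 250.0, 80.0],
})

# Two simple key joins answer "revenue by month and category".
report = (
    fact_sales
    .merge(dim_date, on="date_key")
    .merge(dim_product, on="product_key")
    .groupby(["month", "category"], as_index=False)["amount"].sum()
)
print(report)

A query against a star schema follows this same shape: a fact table, a handful of key joins, and an aggregation.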

2. Temptation to put ALMOST everything in one Tableau Workbook:
When you start creating an application, you start with a small dataset answering a limited set of business questions. This is what Tableau is built for.

Slowly, as more and more people start looking at it, they start asking for more and more information. This is when we start adding new data sets, joins, transformations, and conditions, and our application starts growing from all angles.
It becomes more complex, performance goes down, and it becomes difficult to scale.
If we take a break here and plan things, we can do it in a much better way.
Once we realize that our application is growing, think of going back to point no. 1 above and creating or extending the dimensional model.
You need to recreate your application using a dimensional model. If you think about this early, you will reduce the amount of rework you have to do.
The ideal design is to do all the data analysis/discovery using your source system structures (assuming you do not have a warehouse, or the required information is not present in the warehouse at all).
Utilize all the freedom Tableau provides here. But once you start thinking of making it available for mass consumption by enterprise users, design the required subject areas (Facts and dimensions) or extend the existing ones.
Build your application now using these subject areas. It will be simple, fast, scalable, and easy to maintain. Since the new SQL uses fewer joins, calculations, and aggregations, it will be fast and easier to read.
You can now imagine the benefits. If you need more data elements or metrics, simply keep adding them to your subject areas.
This will enable you to extend or scale your application to a much greater extent, BUT it still does not mean you should put almost everything in one workbook.
There is definitely some more work here, but I am sure you will appreciate the benefits it brings in the long run.

3. I Still want to put almost everything in one Workbook:
You may be wondering whether I am against that. Well, I am not.
There are many instances where we need information from different subject areas or stars displayed side by side on our dashboards, but there are certain things we need to consider and remember.
Since Tableau does not have a semantic layer (aka a common enterprise reporting layer), we need to add all the tables to that one workbook as data sources.
Here the grain of the data plays an important role. If the grain of the data is the same, then everything can fit in one data source/SQL.
But if the grains of the two data sources are different and there is a need for an interaction between them, then the real trouble starts.
By interaction between the two data sources, I mean that we need to pass common filters between them or show data coming from both sources in one viz/report.
When we need such an interaction, we need a join between the two data sources. Tableau allows joins across data sources, or blending, but it may prove very costly in terms of performance and stability.
You would be surprised that even if the individual queries have sub-second response times, after applying the join the response time may be in minutes.
If your individual queries return limited or small amounts of data, it may work for you in some cases.
It is better to always test it out. Even Tableau experts suggest avoiding blending.
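To see why the grain matters, here is a pandas sketch with made-up data of the safer pattern: roll the finer-grained source up to the shared grain before joining, instead of joining row-for-row across different grains.

import pandas as pd

# Source A: sales at the day grain.
daily_sales = pd.DataFrame({
    "region": ["East", "East", "West"],
    "date": ["2016-01-01", "2016-01-02", "2016-01-01"],
    "sales": [100.0, 150.0, 200.0],
})

# Source B: targets at the region grain (coarser than daily sales).
region_targets = pd.DataFrame({
    "region": ["East", "West"],
    "target": [500.0, 400.0],
})

# Aggregate the finer source up to the shared grain (region) before joining;
# a row-level join would duplicate the target on every daily row.
sales_by_region = daily_sales.groupby("region", as_index=False)["sales"].sum()
combined = sales_by_region.merge(region_targets, on="region")
combined["attainment"] = combined["sales"] / combined["target"]
print(combined)

This is roughly what blending has to do behind the scenes, which is why it gets expensive and fragile as the data grows.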

4. OK, what is the solution then?
I know it's frustrating when we talk about limitations only. It is also important to understand why such limitations exist when Tableau is such a nice tool.
Well, Tableau is a tool for data discovery: quickly go grab your data and start visualizing it. Maps are built in and require no configuration, unlike in many other tools. But once we have built those nice dashboards, we need to make them available to enterprise users. Tableau can do certain things here, but it is not made for that. Now you are trying to make it do something that enterprise BI reporting tools such as Oracle OBIEE, Business Objects, or Cognos are made for. Those tools can do some data discovery, but not the way Tableau does; similarly, Tableau can do some dashboarding, but not the way they do it.
Here I am not comparing Tableau with them, since they are not directly comparable and have totally different use cases and technologies.

5. What else can I do?
All right. Here is the solution.
We need to design our Tableau workbooks and dashboards intelligently keeping in mind the limitations.
Think of having a common landing page workbook with hyperlinks to all the other applications. Think of having some very common filters on your landing page, so your first workbook holds just the dimension data for those filters.
You can also think of making one or more of these filters mandatory, meaning users need to select a filter value in order to go to a specific workbook/dashboard.
This helps when your workbooks/dashboards have tons of data and you want to avoid showing all of it at once and slowing your application down.
Now you can build your simplified workbooks based on individual common subject areas and link them to your landing page.
Since Tableau allows passing filters between workbooks, you can pass the common filters from one workbook to another.
There may be cases when we want a dashboard/report with data from two different data sources, and in those cases you can consider blending. I know I said Tableau experts suggest avoiding it.
See if blending works for you; otherwise, think of creating a physical table in the database combining the two sets of data that have different grains.
This table will have data at both grains, and an indicator column will tell which grain each row belongs to (see the sketch below). You will find examples on the web for such cases, since this issue is not specific to Tableau but common in data warehousing.
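Here is a minimal pandas sketch of that combined table, with made-up columns: rows from both grains are unioned into one table, and a grain indicator column lets each report filter down to the grain it needs.

import pandas as pd

# Rows at the daily grain (sales measure, no target).
daily = pd.DataFrame({
    "region": ["East", "East"],
    "date": ["2016-01-01", "2016-01-02"],
    "sales": [100.0, 150.0],
    "target": [None, None],
})
daily["grain"] = "DAY"

# Rows at the region grain (target measure, no date or sales).
region = pd.DataFrame({
    "region": ["East"],
    "date": [None],
    "sales": [None],
    "target": [500.0],
})
region["grain"] = "REGION"

# One physical table holding both grains; the indicator column tells
# which grain each row belongs to, so each worksheet filters to its own grain.
combined = pd.concat([daily, region], ignore_index=True)
print(combined[combined["grain"] == "DAY"])
print(combined[combined["grain"] == "REGION"])

In the database you would build this as a single table (or view) loaded by your ETL, and Tableau would connect to it as one data source.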

6. THAT'S IT?
Well, I guess so, until something else comes to mind. Please post your comments and questions, and share your thoughts and experiences.

Thanks for reading.
Manohar Rana



Saturday, May 28, 2011

Qlikview is now a Leader

Hi,

As I was expecting, Qlikview has joined the Leaders quadrant in Gartner's Magic Quadrant for BI 2011.
Qlikview is cited as a self-contained BI platform, with its strengths being interactivity, great visualization, and end-user friendliness and satisfaction. I am very happy about it.
But I am more focused on the challenges ahead. It will be interesting to see how Qlikview maintains that position and stands up to the competition.
The challenges cited by Gartner are:
1. Lack of expansive product strategy
2. Limited metadata management
3. Lack of broad (high volume) BI deployments
4. Lack of Microsoft office integration
5. Poor performance when data volume and the number of users increase.

The findings are not new, and Qliktech surely needs to think seriously about these shortcomings.
I want to discuss each of the above points in more detail.

1. Lack of expansive product strategy: To compete with large vendors like Oracle, it is very important to have a competent product expansion strategy. Oracle has a very aggressive product strategy and a vision to integrate its various offerings like Oracle BI, Hyperion Essbase, Oracle Enterprise Performance Management and, more importantly, its pre-built analytic models popularly known as BI Applications. Qliktech has already taken one step in this direction by targeting application vendors like Salesforce, and it can offer pre-built models for Salesforce customers, but this is not enough. Qliktech has to work aggressively on developing such pre-built models for other large applications. EPM is one area that is still untouched, and a lack of vision here could be disastrous and simply throw Qliktech out of the competition. Vendors should now think not only about software but also about offering hardware configured for optimum and enhanced performance. Oracle has its popular Oracle Exadata, its database pre-configured on HP's hardware, and is aggressively promoting it.

2. Limited metadata management: Qlikview offers limited metadata management capabilities, and the primary reason, as I see it, is that because Qlikview is focused on small, or much smaller than average, deployments, it did not see much relevance in metadata management. This can be dangerous for Qliktech as well as its clients: as they grow, they will start seeing the need for it and will require the investment they tried to save at the beginning. Even if Qliktech decides to build its capabilities in metadata management, the basic problem will be to start believing in OLAP dimensional modelling, which goes against its basic principles. Qliktech markets its product as a non-OLAP tool, which it actually is not, and treats the underlying data as a cloud in memory. Hence, when it sees the need for conformed dimensions to do cross-functional analysis, it may become a matter of choice rather than a matter of capability.

3. Lack of broad (high volume) BI deployments: For Qliktech, as mentioned above and as cited in Gartner's report, the major challenge will be deploying large-scale applications. So far they have proved their capabilities in small, or much smaller than average, deployments, and I think that is what Qlikview was made for. One of Qliktech's selling points is that Qlikview does not require a data warehouse. Now this same selling point will stop them from moving ahead or proving their capabilities in average and large-scale deployments.
For those who want to know why, please read one of my earlier posts here.
This again will depend on reviewing their sales strategy and correcting their basic beliefs, which is not going to be an easy task. If they do not start embracing the terms data warehouse and OLAP, it will be difficult to maintain the Leaders position.

4. Lack of Microsoft Office integration: This is something I mentioned in one of my posts back in 2008; read it here. It seems Qliktech is not bothered. Its current capabilities are very basic, limited to a simple export to MS Excel. If it does not develop such capabilities in coming releases, it will be hard for Gartner or Forrester to give Qliktech a space in their reports and to compare Qlikview with Oracle or IBM. There are many more such features which I mentioned in my earlier post. The ones that are important, according to me, are connectors for their proprietary QVD and QVW files so that their models can be available to other applications, SQL query generation to help developers with debugging, etc.

5. Poor performance when data volume and the number of users increase: This is again linked to point number 3 above.

Feel free to post your comments or thoughts.

Till next time

Manohar Rana