Friday, May 10, 2013

FHIR: Let's Make Things Difficult Again

More interesting material on Eliot Muir's Interfaceware blog, in a thread on the topic of FHIR integration. Here Eliot proposes a strategy to counteract the looming redifficultization of FHIR, by opening up and democratizing the standards process and thus lowering the transaction costs of collaboration:
V3 and CDA proponents [along with proponents of other standards] could register resources and see who uses them. Over time natural selection will occur – people will gravitate to the resources that are most common and useful. … Some resources will be easier to do quality validation on. This will impact on the value that people see in those resources – for instance very large CDA documents are difficult to validate effectively so that may impact on the value that people see in them.
FHIR as currently conceived, in contrast, is to be, like other parts of HL7, centrally defined:
I’m just sitting on a group that is looking at device integration with FHIR. It’s a beautiful example of how problematic it is trying to make centrally defined standards to handle device data. There are so many different types of devices and the area is always changing. On the other side most of the EMRs don’t have the capability to display the data.
On why healthcare information standards need to be as simple as possible:
… the complexity and problems of the healthcare business itself provide more than enough craziness. Also crazy healthcare standards tend to help competing vendors that are willing to hold their nose and make specialized tools to deal with the niches they helped to create. 
… Truth is that there are lots of incentives for different economic actors in healthcare to make interoperability and standards more complicated. The reality is that some times these create barriers to entry for competitors. What happens though is at some point a tipping point occurs when suddenly it becomes in everyone’s interest to make inter-operability easier rather than harder. 
You have to look at the players realistically to try understand what their motivations are. Follow the money. One area that people make money in is being a consultant to government organizations in terms of helping them with developing interoperability with all the various programs they want to do. There are perverse incentives to make the standards a little complex and obtuse because it creates barriers to entry for competitors to do the work you want to do.
Of course you can have too much of a good thing. One beautiful example was the NHS Spine. I had the joy of reading the spec on that one time to see if we could implement it. It was a nutty amalgam of LDAP, V3, ebXML combined into an impenetrable mess. 
There were consulting companies associated with the development of that spec that had then developed products to implement connection to it. They were a little too effective; though in the sense I think in the end the NHS saw through it all and realized that it was too complicated to succeed. It was a classic example of ‘consultant capture’.
What would it take for FHIR to be successful:
... all participants have to be able to only use the tools they are already comfortable with. One of the numerous reasons that V3 failed to get widespread market adoption was that it required a large investment in time to learn all the tools that were developed internally by HL7.

… An engineer in Mumbai that is working on a glucometer for medical device start up needs to be able to easily find or create a FHIR profile and be able to discuss it online without finding the equivalent of 3 months salary to come to WGM in order to find out they will need to come to six meetings and lobby like crazy before they even get a shot at contributing to the standard.
One of the things I noticed at the HL7 working group is how daunted a lot of the people were with even the current full tool set needed to get going with FHIR. You have to install subversion and have the right version of Java. All of this stuff comes with a whole learning curve associated with it. Right now I can’t even really locate where the tools can be downloaded from the FHIR site.

Sunday, April 21, 2013

For people outside of healthcare what HL7 does with your data seems quite crazy -- and it is!

Eliot Muir, responding to a comment to a recent post on his own Interfaceware blog, makes a number of interesting points on the difference between what you can do with a database, and what you are allowed to do with what he calls 'traditional HL7':
The biggest thing about interfacing is that one way or another you want to get ‘random access’ to 100% of the data contained in any given application.
    Your data – when you want, how you want it, where you want it.
    For hospitals that have IT systems that only offer HL7 interfaces, the smartest way to get data in and out of those things is usually to ignore the HL7 interfaces and just query their databases directly. That’s extremely common for many health care providers I know of – it’s far less complicated. The problem is that the HL7 interfaces often only give access to a subset of the data contained in the database of any given application.
    A second problem is the issue of coupling the data which is exposed to work flow events. i.e. admit, discharge, transfer etc. We’re so used to it in healthcare that it becomes second nature – we think that it’s normal to have to mirror data by always listening for events and faithfully recording the data – heaven forbid we should ever miss a message! Coupling data to events also leads to another headache with HL7 – so called ‘gap analysis’ where people pour over HL7 messages trying to determine for a given desired workflow sequence if a given message has the required data.
    For people outside of healthcare this all seems quite crazy – and it is!
    If I want to look up the demographics of a patient XYZ then I should just be able to query that information anytime I like. You can do that with a database – although that makes most application vendors squirm when you do that. Partly the reasons for that are legitimate technical concerns of violating the integrity of the application. Partly it’s for business reasons as alluded to above.
    ... I personally have no confidence that the IHE HL7 profiles will do anything of significance to improve actual integration outcomes in healthcare. It’s pretty crazy how much money has been put into IHE and how little it’s produced in terms of outcomes in real hospitals.
    I think there are several reasons for this. One reason is that version 2.X HL7 has the above flaws. IHE HL7 doesn’t fix the flaws – instead it erects a shrine to the whole thing and puts a bow on it.
    A second problem is that vendors cobble together all sorts of things with string and rubber bands to participate in the IHE connect-a-thons and then go back home and continue to sell the same systems they have been selling for years which don’t comply to the IHE profiles.
    A third problem is that IHE only tells us about things that committees have gotten together and agreed upon.   That tells us about the past – but it doesn’t tell us about the future. One of the privileges I have from running a company like INTERFACEWARE is that I get a fantastic view into the future of medical IT with respect to integration. I get to talk to dozens of start ups that are thinking of new imaginative ways to leverage the data we have already today to improve healthcare and reduce costs. For those startups RESTful APIs are a lot more convenient.
    Incidentally in the US in the physician practice EMR space over 80% of integration is done using web based APIs – HL7 is more or less dead in that space already – it’s just a few older legacy applications that still use traditional HL7.
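
To make Eliot's 'random access' point concrete, here is a minimal sketch of the kind of direct lookup he has in mind, as against mirroring an event feed. The schema (table and column names) is invented for illustration; every vendor database differs.

import sqlite3  # stand-in for whatever database engine the EHR actually uses


def lookup_demographics(mrn: str) -> dict:
    """Random access: fetch patient XYZ's demographics whenever they are needed.

    The patient table and its columns are hypothetical; real vendor schemas differ.
    """
    conn = sqlite3.connect("ehr.db")
    conn.row_factory = sqlite3.Row
    row = conn.execute(
        "SELECT mrn, family_name, given_name, birth_date, sex "
        "FROM patient WHERE mrn = ?",
        (mrn,),
    ).fetchone()
    conn.close()
    return dict(row) if row else {}


# The traditional HL7 alternative: subscribe to ADT events, faithfully mirror every
# message into a local copy, and hope that no message is ever missed.
if __name__ == "__main__":
    print(lookup_demographics("XYZ"))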


Friday, April 12, 2013

Why FHIR will burn CDA

An interesting post with this title from Eliot Muir, a friend of this blog. (See, for example, "The Rise and Fall of the RIM" from 2011.)

As Eliot points out:

CDA/CCD documents amalgamate lots of data together. There are too many data points. That makes these documents extremely impractical for solving real world integration problems. They tend to be very brittle and difficult to accommodate changes. It’s one of many reasons why the costs associated with using CDA/CCDs for integration are so high with a low return on investment. I have seen the blood in the field.

FHIR, in contrast, has a much better design and is much more modular. (A nice overview is here.) "The FHIR train has left the station. It’s picking up speed and no one, not I, not the HL7 organization, nor even Grahame Grieve who invented the thing can stop it. The concepts behind FHIR will have a life of their own. This train is going to be disruptive. If you run an organization in healthcare you need to know about it. Your only choice at this time is to get on the train, or step in front of it."

We remain sceptical. First, FHIR is owned by HL7. Second, FHIR is still based on the RIM -- which is what caused all the problems in the first place.


Thursday, March 21, 2013

Implementing Obamacare? “Impossible endeavor”


By Michael Barone March 18, 2013 | 3:51 pm

Will the government be able to implement Obamacare smoothly? An “impossible endeavor,” writes a reader who describes himself as “83 years young, married to a beautiful lady for 65 years, with a 54-year career in technology starting with punch cards in the Navy, retired from three major corporations at the director level, last position was with EDS working on Y2K project.” He goes on to list some of the things he believes need to be done, which I quote with his permission. I don’t know enough about this to make a judgment myself, but I have noticed over the years that the federal government has had problems procuring information technology.

RESOURCE REQUIREMENTS

Technical

o   Programmers

Depending on what languages are being used will require that skill set—I bet you a lot of that code is written in COBOL and the last time I have seen that skill set was when we worked on Y2K, and we had to bring them out of retirement.

o   System Analyst

Strong ability to work with subject matter specialist to develop systems requirements and document for the programmers.

o   Technical and Subject Matter Specialists

Writers to develop Standard Operating Procedures (SOP) for the end users (How many will that be???) and SOP’s for operations.

Computers

o   Will we need various types of computer equipment in order to test and migrate to production? Problem is we most likely don't have that computer capacity in order to test followed by migration to production

o   Mainframes, Desktops and other devices that I am unaware of

o   Will vendors have to get involved with hardware/software packages, etc?

o   LANs

Interfaces (based on the GAO report-as follows)

o   IRS

o   HHS

o   TREASURY

o   INSURANCE COMPANIES

o   SSA

o   STATE EXCHANGES

o   CORPORATIONS

o   SMALL BUSINESS

o   MEDICARE

Having identified the above organizations, corporations and Insurance companies will have requirements to modify their computer systems in addition to interfacing with their respective report-to organizations.

And unless I missed it, what about the Doctors having to provide information for Obamacare?

Finally, there is so much more, but I did want to give you a flavor of the magnitude of this impossible endeavor.

Monday, March 04, 2013

Human Action in the Healthcare Domain



An essay by Barry Smith, Lowell Vizenor and Werner Ceusters on the current state of the RIM, entitled 


“Human Action in the Healthcare Domain: A Critical Analysis of HL7’s Reference Information Model” 

has now been published in a Festschrift for Ingvar Johansson. The essay is available online here.


Friday, February 22, 2013

The Weight of the Baby After 5 Years


Fans of “The Weight of the Baby”, which is certainly the funniest of all postings to the HL7 Watch blog, may be interested to note that our loyal friend Anonymous has posted a new comment:

Anonymous said...

That is one mind boggling exchange. WC has the patience of a god. DR is a religious fanatic. DR is into some powerful mojo. Every HL7 consultant team should include a witch doctor.
2/22/2013 2:25 PM 


Wednesday, January 16, 2013

An Ontologist's Guide to HL7

A talk under this heading was presented as part of the National Center for Biomedical Ontology seminar series. Slides and a recording of the talk are available here:

http://www.bioontology.org/ontologist's-guide-to-HL7


Wednesday, January 02, 2013

The RIM of Despond


Looking Back

In the more than 7 years since HL7 Watch was founded in 2005, we have drawn attention to a number of deep flaws in the structure of the RIM. Examples of such flaws are:

1. No coherent distinction between an observation and what it is on the side of the patient that has been observed.
See The Multiple Joys of HL7 V3

2. No coherent treatment of the relation between an act of observation and the statement, assertion, or datum (for instance a measurement result) that is the result of this act.
See The Weight of the Baby

3. No coherent distinction between an act and the record of an act. 
See Still An Incoherent Standard

4. No coherent way within the framework of the RIM to track how concerns (for example diseases) in a single patient evolve with time.
See Still no coherent way to track concerns

5. No coherent way of distinguishing between condition (an enduring entity) and act (an event taking place at a time). (References to the 'condition of the patient' do indeed appear in HL7 standards documents, but the term 'condition' is nowhere defined – because it cannot be defined in conformance to the RIM.)
See Diseases as dependent continuants

6. No coherent distinction between intentional acts (for instance ordering, prescribing) and events in general (for instance accidental falls, events within the interior of the patient's body).
See HL7 and SNOMED CT

7. No coherent way of dealing with what the RIM calls 'effective time'.
See: Still confusion after 14 years

We have repeatedly pointed to the ways in which these flaws cause problems in learnability and teachability, and thus in usability and codability, of HL7 v3-based standards. We have also adverted to the way in which these problems reappear with each new generation of coders and business analysts charged with working with HL7 v3, as evidenced most recently by the email threads we cite below.

Where We Stand Today

HL7 v3 remains an incoherent standard. Indeed, as we have shown in a recent paper (in press), matters are in this respect getting worse. The paper, which is entitled "Human Action in the Healthcare Domain", reviews the recent 'Release 4' of the RIM ballot document ISO/HL7 21731:2011(E), referred to in what follows as ISO RIM Release 4. This document contains a series of welcome attempts on the part of the HL7 community to add clarity to their earlier publications, and it also contains a number of attempts to add ontology-like components to the HL7 structure. Unfortunately, however, these new additions do not replace the earlier, incoherent portions of the RIM specifications. Rather, they are simply added on to the existing formulations, with no attempt (as far as we can see) to secure any sort of logical consistency.

Because, astonishingly, the RIM's basic flaws have in this way been significantly magnified, the result will be further waves of instability in the v3 standards. Because the needed changes will be made by different specialist groups, new inconsistencies will arise, which will be resolved, where they are resolved at all, by ballot, rather than by logic. The prognosis for the future of HL7 is not good.

Matters are made still worse as a consequence of the inclusion of CDA/CCD – with their HL7 legacy elements – in the Meaningful Use standards. We can anticipate that the cries for help from within the HL7 and associated vendor communities will become even louder. First examples are already here:

From: [email protected] [mailto:[email protected]] On
Behalf Of Kumara Prathipati
Sent: Mon 12/24/2012 10:40 PM
Subject: CCD DESIGN - VERY COMPLEX

Hello,

I was trying to understand allergies section CCD examples. It is very interesting the way CCD was designed to state "No known drug allergies".  It is just mind boggling how complex and complicated the CCD design was.

Just to say  "Pt has no known drug allergies' takes 25 lines of CCD. This tells us how inefficient this system has grown to be till now. [25] lines to tell 1 sentence "Patient has no drug allergies" (http://motorcycleguy.blogspot.com/2012/03/how-to-say-no-med-allergies-in.html).

... It is beyond my comprehension how this whole process ended up this complex. (looks like too late to change direction). If it needs a PhD to understand CCD, time to think again and  commit to design a more simple system even if it takes a lot of effort..

Any one who is responsible to create a piece of software to generate CCD or import CCD has to spend his life time understanding. He has to attend many courses given by experts and spend thousands of dollars.

I looked at NCPODP XML representation of ERx which looks a 100 time more simple than CDD and any one can understand in a few hours (including myself who went thru it). Hats off to NCPDP experts who made it so easy any one can understand in less than 1 day.

Looks like the current CCD experts are unable (unwilling since it is too late) to make this a simple but equally effective system to meet 95% of requirements. Your are increasing the cost of creating the interoperability modules for EMR products and HIEs. Still the medical community is not using CCD on a wide scale which is a result of the complex system designed by experts. I do not believe it is not possible to design a system 1/2 as complex. I hear the same response "the requirements are complex and so the system is complex".  I disagree with this statement ...

If CCD becomes very complex and expensive to implement, it will become  a disaster in health care because the whole clincial data exchange depends on this.

I see endless discussions on the element . If it takes so much discussion, the elements are named vaguely and time to junk that element or rename it. Many elements have poor naming systems. It takes some times hours of Google search to understand the meaning of your CCD XML elements and attributes. XML is supposed to have self explanatory elements and attributes. This is missing in in many of elements.

Just to give an example of poor element/answer naming. When I see "code=ASERTION". Just makes no meaning  at all to me. Concept is vaguely explained after extensive Google search.

... I am a practicing physician for 30 years and documenting allergy information for 30 years , almost daily. It's unbelievable frustration to digest your CCD manuals.

Kumara Prathipati MD

---

More examples of such cries for help (and associated general confusion) are provided in the email threads below, which are taken from the HL7 Strucdoc Digest of December 29, 2012. For the sake of readability, I have removed repetition and some digressions, and grouped emails on a single topic in chronological order. Interspersed notes are from myself ([BS]) and Bill Hogan ([WH]).

Thread 1: Confusions regarding Observation.Code and Observation.Value

From: [email protected] [mailto:[email protected]] On
Behalf Of Kumara Prathipati
Sent: Thursday, December 27, 2012 12:19 PM
Subject: Re: ALLERGY SECTION QUESTION

Brian,

To help many many thousands of coders and business analysts working in EHR, HIE companies we have to make this simple, simple and more simple.

I will  explain like this


  Observation/Code = question

  Observation/Value = answer


some times answer does not  require a question (then you can use nullflavor).  This happens when answer is self explanatory.

Examples


  Observation/code = manifestation

  Observation/value = skin rash


  Observation/Code = Temperature

  Observation/Value = 99.8


Every observation must have /code and /value


  Observation/Code = wound depth

  Observation/value = 2.2 cm


  Observation/Code = heart murmur

  Observation/value = absent


I can [give] a 1,000 examples applicable in health care. I see no need to explain in 10 sentences but need to give 20 examples. Then no one has to attend courses to understand. Any one can implement CCD/CDA.

For heavens sake, at least give lot of examples with various clinical situations. EXISTING SYSTEM IS TOO COMPLEX, COMPLICATED, CONFUSING, FRUSTRATING ...

Kumara

---

From: [email protected] [mailto:[email protected]] On
Behalf Of Brian Zvi Weiss
Sent: Thursday, December 27, 2012 4:44 PM
Subject: RE:
Kumara,

If I understand you correctly, the case you are making for the code/value of an observation being a question/answer with "code" always being present (or nullflavor),  is really more an argument about what SHOULD be the case in your view, rather than what IS the case. Correct?

The white paper from the Terminfo project talks to the role of code in the RIM as being "the action taken in making the observation".  It jumps through a lot of hoops to even justify "Body Weight" being a "code":

This example is not in line with strict interpretation of the formal RIM definition in which the Observation.code is the action taken to make the observation. However, it is a more familiar form in real-world clinical statements about many observations. A possible bridge between these two views is to regard the name of the property observed (i.e. "body weight") as implying the action to measure or observe that property.

So, the definition of "code" becomes "action of observing or the property observed" - and for situations where you don't have either of those, ASSERTION (not a nullflavor) is used for "code".

[BS:] This tells us how deep is the confusion in HL7 circles as to what is meant by 'code'.

I'm not saying those were good decisions or arguing with you that we wouldn't be better off with what you recommended below.

But I do think it's important that we keep separate:

1. "support" questions about how the standard is to be implemented (resolving ambiguity, establishing best practice, need for more examples, etc.).

2.  questions/challenges on the standard itself (that have to be addressed in future versions and other standards creation work)

I'm still not 100% clear if this listserv is the place for both of those agendas - I think it is.  Either way, it has to be clear to all when we are involved in a discussion about #1 and when about #2.

So, if we limit ourselves to #1 for a moment on this topic, I don't think your explanation works because it doesn't seem aligned with what the C-CDA spec requires.  The C-CDA spec is clear on where ASSERTION has to be used ... and other guidance on what values or value sets are legitimate for "code" in other templates.  ...

Brian

---

From: Mead Walker [mailto:[email protected]]
Sent: Thursday, December 27, 2012 19:36
Subject: RE: ALLERGY SECTION QUESTION

Hello Kumara,

I think your suggestion of illustrating points of possible confusion with examples is a great one.

However, it does seem one of your examples sits on the minority side of a much earlier debate about the use of code and value. Namely,


Observation/Code = heart murmur

Observation/value = absent


I think the more conventional approach would be:


Observation/Code = ASSERTION

Observation/value = heart murmur


By the way, I have always thought that one of the drivers behind this was the desire to identify preferred code systems for observation code (LOINC) and observation value (SNOMED and others (although hopefully only SNOMED to some))

Mead

[BS] Compare the earlier debate in ontology circles as concerns an effective avenue for ensuring consistency as between 'Attribute' and 'Value', where some, for example, regard 'Color' as Attribute and 'Red' as Value, others regard 'Red' as Attribute, 'Dark' as Value. The solution proposed involves the imposition of a single hierarchy, whereby all values are seen as is_a children (subtypes) of the corresponding codes (values for codes at one level can be codes themselves for values at a lower level). Thus for example

Color
    Red
        Dark

Manifestation
    Skin rash manifestation
        Severe skin rash manifestation

Temperature
    99.8 degree Celsius temperature

Wound depth
    2.2 cm wound depth

Observation for potential heart murmur
    Observation for potential heart murmur with result: negative

For the general idea, see the discussion of the EQ method in Nicole L. Washington, Melissa A. Haendel, Christopher J. Mungall, Michael Ashburner, Monte Westerfield, and Suzanna E. Lewis, "Linking Human Diseases to Animal Models Using Ontology-Based Phenotype Annotation", PLoS Biology 2009 November; 7(11): e1000247. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2774506/
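
To see the ambiguity discussed in this thread in miniature, here is a sketch of the two rival encodings of the same finding. The structure is illustrative only (plain strings stand in for the coded terms a real message would carry); it is not drawn from any implementation guide.

# Two rival ways of encoding "no heart murmur found", as debated above.
# Plain strings stand in for coded terms (LOINC, SNOMED CT) purely for illustration.

# Kumara's reading: Observation.code poses the question, Observation.value gives the answer.
observation_question_answer = {
    "code": "heart murmur",   # the property asked about
    "value": "absent",        # the answer
}

# The convention Mead describes: code carries the fixed token ASSERTION and the
# finding itself moves into value, with absence expressed by some further means.
observation_assertion_style = {
    "code": "ASSERTION",
    "value": "heart murmur",
}

# Nothing in the data itself tells a receiver which convention the sender followed;
# that is precisely the source of the confusion documented in this thread.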

----

Thread 2: Assertions and Observations

From: [email protected] [mailto:[email protected]] On
Behalf Of Lisa Nelson
Sent: Friday, December 28, 2012 01:52
Subject: RE:

Brian,

I thank you for bringing up both this issue about the use of "Assertion" in the code element of an observation and the syntactical issue of the outer Problem Concern Act that can group several Problem Observations into a single Problem Concern.

I have been very concerned about the use of "Assertion" in the observation templates you have identified and several others too. (Check out some of the Problem Observation examples on page 442 of the C-CDA IG. They show the use of "Assertion" as the observation/code/@code even though the value set established for the code element does not include "Assertion" as one of the value set codes.)  Also, my experience testing CDA Documents for Connectathon has revealed that vendors do not adequately represent Problem Concerns in the narrative text of a Problem Section.  They are revealing a "structured representation" of the machine readable entries which does not show, in a humanly readable way, the relationship of the outer Problem Concern which wraps the Problem Observations.  Vendors just aren't getting this (in my opinion).

I think it is very important to examine the impact and rationale for using "Assertion" and further delve into the Problem Concern Act and what it's use implies for implementations. Now that we are beginning to get some real implementation experience, I think we should step back and see if we truly understand the impact of our earlier design choices, in order to confirm if they all still make sense or not.

... I think it is time we clearly understand what this decision implies for the practical use cases of the data.  I would not be at all surprised if something that we thought made sense a couple of years ago, turns out not to be a good idea, now that we see the implications for implementation.  I think this topic about the use of assertion and the other topic about the use of the outer Concern Act need to be carefully scrutinized to make sure our implementation guidance makes sense in light of what we are envisioning for quality measures which are highly dependent on being able to identify problems in a patient's record. Now that we have  a clearer picture about how Quality Measures are specified (HQMF) and how patient-level quality documents are created (QRDA), I think we need to make sure that our implementation guidance for recording problems, lines up with the envisioned future uses of the information. ...

Regards,

Lisa

----

From: [email protected] [mailto:[email protected]] On
Behalf Of Brian Zvi Weiss
Sent: Friday, December 28, 2012 5:05 AM
Subject: Considering Changes in ASSERTION and nesting of Problem and Allergy Concern Acts

Thanks, Lisa.  ... I think we have to be very careful here when it comes to how we approach evolving the standard, now that it is baked into MU2 and the "train has left the station" on that.

In my thread yesterday with Bob re. my expectations for active "disambiguation"  of the spec by HL7, my assumption was that we were looking to create additional constraints that would further refine the ones we already have in order  to eliminate ambiguity in the implementation of those existing constraints - but without changing the existing constraints.

But here you are (as Kumara was earlier) referring to potentially substantively changing the spec.  I'm not arguing the correctness of your recommendations - they make sense.  But as in my earlier comments to Kumara, I think we need to be very careful to keep separate the issue of "how the spec should change/evolve" from "support infrastructure".  Really there are three categories here:

1)  Support infrastructure for answering questions that don't have implications for changes in the spec

2) Resolution of ambiguities via additional constraints (what Josh terms "best practices") being added to the spec (initially as "best practice guidance" and then working their way into the next release of the spec)

3) Evolution of the spec itself to change how things are done (creating new constraints that contradict the previous ones)

... In #1, I think the key issue is to what level HL7 wants to be involved here, the business model for funding it (e.g. membership only), the resulting SLA and infrastructure, etc.

 In #2, I think the key issues are:

A.  What is the right way to handle the process in both an authoritative and timely way, given that the cycle for full revisions of a spec and all the associated process for attaining consensus via balloting, etc. is too slow and we can't leave so many fundamental ambiguities out there for the market to sort out one pairwise integration at a time.

B. What is the commitment level of HL7 to focusing on this agenda rather than just "moving on" to the next version of the spec, other standards, etc. and the infrastructure for making it work

In #3 I think the key issue is what the rules of the road are once the train has left the station, as I noted above.  MU2 rules are working their way into certification testing infrastructure and are actively being worked on by vendors who have high levels of pressure and urgency to get their products "MU2 certified" quickly.  The implications of changing up the rules on them midstream is worrying.

... Of course at the end of the day, as you noted, we shouldn't have to live forever with a significant mistake (from a practical implementation perspective) once we've identified it.  Just saying it's tougher to navigate the whole issue of backwards compatibility when a particular release takes on a life of its own as part of something like MU.  I think it would be a something of a nightmare if the latest C-CDA spec was not consistent with what was being tested (or planned to be tested soon) for the latest (or upcoming) MU certification round.

So I would caution us re. the "enemy of 'good enough' being 'perfect' ".  As long as we disambiguate and provide enough examples, the market will manage through the stuff that has us now scratching our head and asking "how did we end up with this strange construct".  Not ideal, but probably not tragic - as long as there is a clear, single, right way to create/interpret the information and it is possible to get in the data into the document.  The time may have passed for "doing it better".

---

From: Lisa Nelson [mailto:[email protected]]
Sent: Friday, December 28, 2012 6:08 AM
Subject: RE: Considering Changes in ASSERTION and nesting of Problem and
Allergy Concern Acts

Brian,

... I agree with the sentiment that you have expressed. We need to find a way to work on this plane while we are flying it.  ...

I believe we can do it.  We need to do the disambiguation, develop the examples, and then release clarified guidance (which could involve adding some additional constraints) in a way that doesn't break everything we have already put in place.  I think that template versioning plays a key role in developing the ability to do this, which is why I'm focusing there first. Once we have clearly defined how to do versioning, then we will have the mechanism to release, for example, a new Problem Observation  or Problem Concern Act template which is a new version of the existing template but includes the revisions which we determine will add the needed clarity without breaking our prior constraint assumptions.  It is a tricky puzzle to solve, but I'm certain it can be done.  ...

Lisa

Lisa R. Nelson, MS, MBA | Consultant | Life Over Time Solutons | cell:
401.219.1165 | Westerly, RI | [email protected]

---

From: [email protected] [mailto:[email protected]] On Behalf Of W. Ted Klein
Sent: Friday, December 28, 2012 8:54 AM
Subject: Re: HITSP value set stewardship

... I am concerned because questions about Quality Measures keep coming up around the vocabulary, and I don't know of any authoritative sources of truth for the sets of codes to be used, and in some cases even the identifier of the value set to be used.  This whole thing resembles some kind of hot potato that no one wants to hold on to for very long.

Ted

----

From James T. Case
James T. Case MS, DVM, PhD, FHL7, FACMI
Health Program Specialist, SNOMED CT
National Library of Medicine, National Institutes of Health
On Dec 28, 2012, at 10:35 AM, "Case, James (NIH/NLM) [E]" wrote:

Ted,

As far as I am aware, all of the value sets that were identified for the approved eMeasures for MU stage 2 are available from VSAC (https://vsac.nlm.nih.gov).  You would be well advised to point those that have vocabulary questions to that site.  The VSAC is the "source of truth" for all of these measure value sets.

Jim

---

From: [email protected] [mailto:[email protected]] On
Behalf Of Brian Zvi Weiss
Sent: Thursday, December 27, 2012 12:41 PM
Subject: RE: ALLERGY SECTION QUESTION

Josh,

...  Sounds pretty compelling - would be curious if anyone on this list wants to make the case for a counter-argument that multiple observations within a concern act (allergies and/or problems) should be used?

Does the best practice place any value on the concern act at all?   As per Gaby's note, the effectiveTime data range in the concern act adds no value if C-CDA says it has to be the same as that in the observation (BTW, where does it say that? I tried to find that in the C-CDA IG but didn't see it?). The concern act status doesn't seem to add value, only confusion if it contradicts the observation status.

So, is the best practice just to consider the concern act "wrapper overhead" and create it to spec when creating the C-CDA and ignore it on interpretation of a received C-CDA?

Brian

---

Thread 3: Continuing Confusions about Effective Time

From: Brian Zvi Weiss [mailto:[email protected]]
Sent: Thursday, December 27, 2012 1:06 PM
Subject: RE: ALLERGY SECTION QUESTION

... So, it sounds like there isn't consensus on the best practice here that Josh is recommending (limiting concern to a single observation).  Various questions in my mind (like how the example you gave would work given the limitation Gaby and Josh noted on the effectiveTime in the concern and the observations - though also not sure where in the IG it says that) but I'm out of my depth here.  My boundary ends with "understanding what is in the standard" (trying to do that) not discussing "what should be in the standard".  So, I'll leave that to you, Josh, and others.

As always, I would just encourage us to not leave this hanging and try to come to some kind of authoritative guidance.  This is another example of  where the spec alone isn't enough (as "the whole problem list in a single concern" is syntactically valid, there is debate on the best practice recommendation, etc.).  I'm happy to assist in writing up the conclusions. But can't help in deciding what that conclusion should be.

Brian

---

From: Bob Dolin [mailto:[email protected]]
Sent: Thursday, December 27, 2012 22:49
Subject: RE: ALLERGY SECTION QUESTION

Hi Brian,

... Think of the concern act as corresponding to an item on a problem list. Pretty much every EHR I've seen allows you to make sequential updates to a problem - e.g. today you might call it "chest pain", next week, after further study, you might update it to "esophagitis". I acknowledge that more guidance would help.

To Josh's point, the rationale for multiple observations in a Concern wasn't to allow you to put the whole problem list in a single Concern, but rather to allow you to track the course of a problem over time.

Bob

---

From: Bob Dolin [mailto:[email protected]]
Sent: Thursday, December 27, 2012 23:26
Subject: RE: ALLERGY SECTION QUESTION

Hi Brian,

Where a concern has multiple observations - consider an EHR, where a clinician updates an item on the problem list, then updates that item again at a later date. Typically, the most recent observation would be displayed by the EHR, with the other observations retained for historic reference.

As for "authoritative guidance" - this is tricky. Imagine for instance, we create a standard that has an ambiguity (I interpret it one way, you interpret it another way). We then issue "authoritative guidance" that says to do it the way you've interpreted it. Would you then find instances based on the way I interpret it to be non-conformant? Historically, Structured Documents has issued "internal working documents" [http://wiki.hl7.org/index.php?title=Structured_Documents_Internal_Working_Documents]. ...

Bob
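
A minimal sketch of the structure Bob describes may help; the field names, findings and dates below are invented for illustration.

# One problem-list item ("concern") tracking a single evolving problem over time.
# Field names, findings and dates are invented for illustration.
concern = {
    "status": "active",  # status of the concern, i.e. of the problem-list item
    "observations": [    # sequential updates to the same underlying problem
        {"date": "2012-12-01", "finding": "chest pain"},
        {"date": "2012-12-08", "finding": "esophagitis"},  # revised after further study
    ],
}

# As Bob notes, an EHR would typically display only the most recent observation,
# retaining the earlier ones for historical reference.
latest_observation = concern["observations"][-1]
print(latest_observation["finding"])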

---

Subject: effectiveTime in Problem Concern Act and nested Problem Observations (and effectiveTime in Allergy Concern Act)
From: "Brian Zvi Weiss"
Date: Fri, 28 Dec 2012 10:18:55 +0200

Bob,

Where a concern has multiple observations - consider an EHR, where a clinician updates an item on the problem list, then updates that item again at a later date. Typically, the most recent observation would be displayed by the EHR, with the other observations retained for historic reference.

Can you explain how effectiveTime should be used in the problem concern act you described (item on the problem list updated several times and the other observations retained for historic reference)?  Ideal would be an example C-CDA snippet demonstrating this.
 


[WH]  Here we see ongoing and persistent confusion about what effectiveTime is for.


From what Josh and Gaby wrote there seems to be an understanding that the effectiveTime should be the same for all observations in the same act and for the act itself.  I can't yet figure out where it says this in the C-CDA spec - I think Josh indicated this was implied by the guidance to use "onset date" for the lower bound of the effectiveTime and Gaby seemed to suggest it was an explicit requirement that the act and observation effectiveTime match.  All I saw for Problems was the following in the Problem Act:

The effectiveTime element records the starting and ending times during which the concern was active on the Problem List.

And the following for the problem observation:

This field [low] represents the onset date.  This field [high] represents the resolution date.  If the problem is known to be resolved, but the date of resolution is not known, then the high element SHALL be present, and the nullFlavor attribute SHALL be set to 'UNK'. Therefore, the existence of a high element within a problem does indicate that the problem has been resolved.

But assuming Gaby and Josh are correct, those constraints they indicate don't seem consistent with the scenario you are describing whereby the whole point of having multiple observations in an act is to track the historical evolution of the concern as the observations change?

I'm trying to get to the bottom of this. a concrete example would help out a lot here!

Once that is clear, I'd also like to understand the difference in how effectiveTime works in Problems and in Allergies.  The guidance on the Allergy Concern Act is:

If statusCode="active" Active, then effectiveTime SHALL contain [1..1] low.
If statusCode="completed" Completed, then effectiveTime SHALL contain [1..1] high.

and there is no effectiveTime on the allergy problem observation(s) contained inside the allergy concern act.

Brian

[BS] From the ISO RIM Release 4 document we learn that  


Act.effectiveTime =Def. The clinically or operationally relevant time of an act, exclusive of administrative activity.

In the associated Usage Notes we are told that 'The effectiveTime is also known as ... the "biologically relevant time" (HL7 v2.x).' The first example provided is then: 'For clinical Observations, the effectiveTime is the time at which the observation holds (is effective) for the patient.' Unfortunately, the effectiveTime in this example is not the 'relevant time of an act'; rather, it is the relevant time of a condition on the side of the patient. For a clinical observation such as 'staph aureus infection detected' the effectiveTime (as "biologically relevant time") would start when the corresponding condition first begins to exist in the patient. But because the RIM has no place for conditions in the patient, but rather only for observations of conditions, the RIM has no means to formulate this understanding clearly. The result is the repeated flare-ups on HL7 discussion lists in which ever new parties complain that they do not understand what is meant by 'effectiveTime'.

As the reader can easily ascertain, the further examples of correct usage of 'effectiveTime' in the ISO Release 4 document do not, unfortunately, resolve the confusion.
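
The distinction the RIM fails to capture can be stated in a few lines. The following sketch is mine, not HL7's; the field names are invented.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ClinicalObservation:
    """Sketch separating two times that effectiveTime is asked to carry."""
    statement: str                        # e.g. "staph aureus infection detected"
    observed_at: datetime                 # when the act of observing took place
    condition_onset: Optional[datetime]   # when the condition began to exist in the patient
    condition_resolved: Optional[datetime] = None  # when it ceased to exist, if known


# The "biologically relevant time" reading of effectiveTime points at condition_onset
# and condition_resolved; the RIM's own definition ("relevant time of an act") points
# at observed_at. Implementers are left to guess which of the two is intended.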

---


Subject: statusCode in Problem Concern Act and nested Problem Observations (and Allergy Concern Act / Observation)
From: "Brian Zvi Weiss" Date: Fri, 28 Dec 2012 11:26:56 +0200

Bob,

....  In this mail I want to focus on status values.  Again, let's start with Problems and then we'll go to Allergies.

The Problem Concern Act has a status code where the value set is listed as 2.16.840.1.113883.11.20.9.19 (ProblemAct statusCode) - which means the following choice of values (as per Table 124: ProblemAct statusCode Value Set): active, suspended, aborted, completed.

The Problem Observation status code comes from the same code system as above
(2.16.840.1.113883.5.14 HL7 ActStatus) and is set to a fixed value of "completed".  Nested inside the Problem Observation is (optionally, 0..1) a single Problem Status, whose value attribute comes from the value set HITSPProblemStatus 2.16.840.1.113883.3.88.12.80.68 which is part of SNOMED.  The values there are: active, inactive, resolved.

So.

1)      What is the precise meaning of the status in the status code of the Concern Act?

2)      What is the precise meaning of the status in the Problem Status value inside the Problem Observation(s) inside the Concern Act?

3)      What rules, if any, govern the relationship between the status of the Act and that of the Observation it contains (in the case of a single Observation)?

4)      What rules, if any, govern the relationship between the status in the Problem Status value inside the Problem Observations when there are multiple Observations inside a single Concern Act?  Does the case of multiple Observations change the previous answer in #3 re. relationship of status in Concern Act and status in Observations (plural)?

5)      How are status changes of an Observation managed within a Concern Act?  Are you supposed to have multiple Observations indicating the evolution of the Observation within the Concern, or just replace status of the Observation with the new status?

In terms of Allergies, the status codes match up with Problems - the concern act uses a status from 2.16.840.1.113883.11.20.9.19 (ProblemAct statusCode) and the Allergy Intolerance Observation has nested in it (optionally, [0..1]) an Allergy Status Observation whose value attribute comes from 2.16.840.1.113883.3.88.12.80.68 (HITSPProblemStatus).  So, hopefully the answers above are directly applicable to Allergy Concerns/Observations as well.

Brian

----

Subject: RE: strucdoc digest: December 28, 2012
From: William Goossen Date: Sat, 29 Dec 2012 23:32:55 +0100

Brian,

The concern act would get a time for its creation. An observation of an onset would get its own time, which could be different from the concern, onset can be of earlier date. It is well explained in the care provision domain care structures topic. Unfortunately you will have to go back for the details to Sept 2009 dstu ballot

Vriendelijke groet,

William Goossen

---


Thursday, October 11, 2012

CDISC Discovers "Concept Maps"

My attention was drawn to interesting developments on the CDISC front by the following, from Kerstin Forsberg of AstraZeneca:

Checkout "mind maps" slide 13-15, 27 for#CDISC SHARE http://ow.ly/dHfy5  < Then have a look at http://code.google.com/p/ogms/

The CDISC slide deck she refers to does indeed contain interesting representations of, for example, deep brain stimulation as a treatment of Parkinson's Disease (slide 14):




But the question arises: how are the nodes and edges in such a graph to be interpreted? On slide 14 we are told that:
[the b]asic building block is a “concept” which is a piece of clinical information. Examples include:
–systolic blood pressure observation
–systolic blood pressure result
–sodium concentration in plasma observation
–subject's birth weight result
–study subject
–visit
Each of these concepts has component parts (including what we would conventionally call variables)
The 'treatment' and 'Parkinson's Disease' nodes in the above, therefore, represent pieces of clinical information. Given the context in which slide 14 appears, I assume that these are pieces of clinical information about some given patient. But how, on this basis, can we provide a coherent interpretation of the edges on the graph? How, for example, can 'implanted in' be understood as linking a piece of information about this patient's brain with a piece of information about some lead?

Questions such as this were addressed (and, I had optimistically assumed, put to rest) already in 2006. The answer to such questions is that there is no coherent interpretation of the edges in a graph of the sort displayed if the graph is taken to be about relations between concepts. The Ontology for General Medical Science (OGMS) shows, I believe, how to create and interpret such graphs in a coherent fashion -- paying careful attention to the distinction between portions of reality on the side of the patient -- for example actions of treating, brains, neurostimulators -- and the types of which these portions of reality are instances.

OGMS is built on the basis of the assumption that each term and each relation used in the graph representation of a clinical encounter needs to be defined in a logical way. Only thus can the information contained in the graph serve computational inference. It is sad that, after so many years, important groups investing considerable efforts in healthcare informatics have still not apprehended the need for such definitions.

Update: October 18, 2012

XML4Pharma submitted the following query:

Not being a specialist in ontologies, I need some more explanation.
Do you mean that for each node, the edges (predicates I presume in the RDF context) cannot be defined in a unique way, i.e. there is an infinite number of possibilities for each edge to be named? E.g.
1. What one person defines as "is part of" can be defined by another as "is component of". Is that the problem?
2. Or should there for each pair of nodes be only a distinct set of predicates available?
3. Or do you mean that to assign a name to an edge, some systematic rules must be followed?
Ad 1. Currently, the standards defined for RDF, as for OWL (as for XML), place very few restrictions on what relations and what sorts of relations can be used to link nodes in an ontology graph, and they place no restrictions at all on what such relations should be called. This leads to the same sort of chaos as would be created if different airlines used different standards for representing time in publishing their schedules. In "Relations in Biomedical Ontology" we suggested a solution to this problem, and this solution has been applied and refined within the framework of the OBO Foundry.

Ad 2. I believe that, for each pair of nodes, only a small set of relations will be meaningfully applicable.

Ad 3. As stated under 1., we need standards which will ensure that the same relations receive the same names on all occasions of use. OGMS is an attempt to set forth the needed standards for annotating data relating to clinical encounters.

Thursday, September 06, 2012

HL7 To Make Intellectual Property Available at No Cost

On Tuesday, HL7 International announced that, beginning in 2013, it will make most of its intellectual property, including standards, available at no cost under licensing agreements. 

For more details, see HealthcareInformatics, 9/4, which refers to:
a produced statement [from] John Halamka, M.D., MS, CIO at the Beth Israel Deaconess Medical Center and Professor of Medicine at Harvard Medical School: “This announcement is the most significant standards development in the past decade ... It ensures that every stakeholder will have ready access to the content standards they need for Meaningful Use.”
This statement may, however, be a mite misleading. As I understand matters, those wishing to satisfy Meaningful Use criteria  can use CCR from ASTM instead of CCD from the HL7 shop (and even CCD is now a chimera of CCR/HL7 CDA). HL7 seems to have been unresponsive to actual needs, to the degree that it was surprised and embarrassed by CCR.

Friday, August 03, 2012

Re-fur-bish (verb): To bish, again, with fur


Rene Spronk, in his new proposal to "Renovate HL7 version 3" recognizes that HL7 v3 has failed. As he points out, almost all implementations of v3 today are not in keeping with the original intentions of the v3 developers.  “I've been working on the HL7 version 3 standard for about 10 years - but based on experiences gained during consultancy projects with implementers, and based on experiences of implementers …, I see no other way forward for HL7 v3 then to embark on a renovation project.”

'Renovation', according to the dictionary means: to restore to an earlier, better state. For Rene, to renovate means:  to optimize "for the kind of HL7 v3 implementations that we actually see in use today".  He proposes three alternative paths to such renovation:
Proposal 1.  involves adding a new layer of "re-usable elements" to HL7 v3 and "getting rid of the 100s of different R-MIMs".  
But where will these new re-usable elements come from? And who, given the failures thus far, will trust a methodology based on the RIM to yield elements which will actually be reused?

More interesting are the proposed second and third alternative paths:
Proposal 2. is to renovate HL7 v3 by "moving everything to CDA R3, and to cease development related to HL7 v3 messaging."
Proposal 3. is to renovate by "moving everything to FHIR [= Fast Healthcare Interoperability Resources] and to cease development of HL7 v3".
Thus in both cases ‘renovation’ in the sense of: abandonment. 

Everything in the FHIR (pronounced ‘Fire’) is, to be sure, required to have a ‘mapping to the RIM’. But this, we presume, is simply for reasons of nostalgia ... for earlier, better times.

Sunday, June 24, 2012

Death by HIPAA

From INFO/LAW: http://blogs.law.harvard.edu/infolaw/2012/06/22/death-by-hipaa/:

Vioxx, the non-steroidal anti-inflammatory drug once prescribed for arthritis, was on the market for over five years before it was withdrawn from the market in 2004. Though a group of small-scale studies had found a correlation between Vioxx and increased risk of heart attack, the FDA did not have convincing evidence until it completed its own analysis of 1.4 million Kaiser Permanente HMO members. By the time Vioxx was pulled, it had caused between 88,000 and 139,000 unnecessary heart attacks, and 27,000-55,000 avoidable deaths.
The Vioxx debacle is a haunting illustration of the importance of large-scale data research….If researchers had had access to 7 million longitudinal patient records, a statistically significant relationship between Vioxx and heart attack would have been revealed in under three years. If researchers had had access to 100 million longitudinal patient records, the relationship would have been discovered in just three months….


Monday, March 12, 2012

HL7 RIM has done its job on HL7 V3. Now it’s time to undermine the V2 standard

I am noticing changes introduced in V2.7 of the HL7 standard – described by one Asian HL7 expert as ‘subtle, insidious and dangerous’ – which consist in the insertion of failed RIM strategies into the newer versions of V2.x. Simple, usable standards are thereby becoming progressively more complex. Mood Codes, for example, the source of so much of what is impenetrable in HL7 V3, are now being reverse-engineered into V2.7, even while the V2.7 documentation acknowledges that "At this time, there are no documented use cases for this field." And the CWE ('coded with exceptions') data type, which in V2.6 had 9 components, has 22 components in V2.7 along with multiple new subcomponents.
While, therefore, it is being widely acknowledged on the one hand that HL7 V3 has failed, V3's defenders are on the other hand injecting into V2.x some of the very components of the V3 approach which had made it non-viable for messaging.

On Mood Codes in V2.7

For examples of insertions of Mood Codes in V2.7, see chapter 2, section 2.8.1 and Table 0725, and chapter 7, section 7.4.2 (OBX - component 22 - CNE - Mood code):
Definition: This field identifies the actuality of the observation (e.g., intent, request, promise, event). Refer to HL7 Table 0725 - Mood Codes for valid values. This field may only be used with new trigger events and new messages from v2.6 onward.
Note: OBX-22 Mood Code was introduced in v2.6 to support Patient Care messaging concepts and constructs. At this time, there are no documented use cases for this field in the context messages as described in this chapter. This statement does not preclude the use of OBX-22, but implementers should exercise caution in using this field outside of the Patient Care context until appropriate use cases are established. While a similar note exists for OBX-21 Observation Instance Identifier, particular care should be taken with OBX-22 as this could modify the intent of the segment/message and create backward compatibility problems.
V2.7, Chapter 11.2.1.3: "The use of HL7 Version 2.x in clinical messaging has involved the use of segments in ways for which they were not originally intended, as well as the development of the REL segment to express important relationships between clinical data components. Such use has also necessitated the introduction of mood codes to allow for the richer representation of intent, purpose, timing, and other event contingencies that such concepts required."
On CWE (coded with exceptions) in V2.6 and V2.7
In V2.6, CWE had 9 components as follows: 

SEQ   LEN   DT   OPT   TBL#   COMPONENT NAME                       SEC.REF.
1     20    ST   O            Identifier                           2.A.74
2     199   ST   O            Text                                 2.A.74
3     20    ID   O     0396   Name of Coding System                2.A.35
4     20    ST   O            Alternate Identifier                 2.A.74
5     199   ST   O            Alternate Text                       2.A.74
6     20    ID   O     0396   Name of Alternate Coding System      2.A.35
7     10    ST   C            Coding System Version ID             2.A.74
8     10    ST   O            Alternate Coding System Version ID   2.A.74
9     199   ST   O            Original Text                        2.A.74


In V2.7, however, this has been expanded to 22 components, which are made even more complicated by the introduction of Value Sets and OIDs (unique object identifiers).
As of v2.7 a third tuple, formerly known as triplet, has been added to the CWE data type. Additionally, 3 new components were added to each tuple such that each tuple now has a total of 7 components.

So each tuple from V2.6 receives an additional coding system OID (e.g. SEQ 14), a value set OID (SEQ 15) and a value set version ID (SEQ 16), and this set of 7 components is repeated 3 times.

HL7 Component Table - CWE – Coded with Exceptions

SEQ   LEN     C.LEN   DT    OPT   TBL#   COMPONENT NAME
1             20=     ST    O            Identifier
2             199#    ST    O            Text
3     1..12           ID    C            Name of Coding System
4             20=     ST    O            Alternate Identifier
5             199#    ST    O            Alternate Text
6     1..12           ID    C            Name of Alternate Coding System
7             10=     ST    C            Coding System Version ID
8             10=     ST    O            Alternate Coding System Version ID
9             199#    ST    O            Original Text
10            20=     ST    O            Second Alternate Identifier
11            199#    ST    O            Second Alternate Text
12    1..12           ID    C            Name of Second Alternate Coding System
13            10=     ST    O            Second Alternate Coding System Version ID
14            199=    ST    C            Coding System OID
15            199=    ST    O            Value Set OID
16            8=      DTM   C            Value Set Version ID
17            199=    ST    C            Alternate Coding System OID
18            199=    ST    O            Alternate Value Set OID
19            8=      DTM   C            Alternate Value Set Version ID
20            199=    ST    C            Second Alternate Coding System OID
21            199=    ST    O            Second Alternate Value Set OID
22            8=      DTM   C            Second Alternate Value Set Version ID
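
To see what this growth means on the wire, here is a minimal sketch that splits a CWE-encoded field into its named components. The component names follow the V2.7 table above; the example field content is invented, and the default component separator '^' is assumed.

# V2.7 CWE component names, in SEQ order, taken from the table above.
CWE_V27_COMPONENTS = [
    "Identifier", "Text", "Name of Coding System",
    "Alternate Identifier", "Alternate Text", "Name of Alternate Coding System",
    "Coding System Version ID", "Alternate Coding System Version ID", "Original Text",
    "Second Alternate Identifier", "Second Alternate Text",
    "Name of Second Alternate Coding System", "Second Alternate Coding System Version ID",
    "Coding System OID", "Value Set OID", "Value Set Version ID",
    "Alternate Coding System OID", "Alternate Value Set OID", "Alternate Value Set Version ID",
    "Second Alternate Coding System OID", "Second Alternate Value Set OID",
    "Second Alternate Value Set Version ID",
]


def parse_cwe(field: str, component_separator: str = "^") -> dict:
    """Split a CWE field into named components; absent trailing components are omitted."""
    parts = field.split(component_separator)
    return {name: value for name, value in zip(CWE_V27_COMPONENTS, parts) if value}


# A V2.6-era sender typically populates only a handful of the old 9 components;
# the field value below is invented for illustration.
print(parse_cwe("GLU^Glucose^LN"))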