The patient-level indicator reporting project works closely with the analytics engine, with the aim of querying and streaming patient-level data from an OpenMRS instance, through the analytics engine and an OpenHIM mediator, to a shared health record (i.e., a HAPI FHIR JPA server) that client systems with the right credentials can query.
Tools used in the analytics engine.
A number of tools are used in the analytics engine, in various streaming modes, to query and stream data from OpenMRS through a mediator to a shared health record, as explained below:
Streaming mode (Atom Feed).
In Atom Feed streaming mode we mainly rely on the Atom Feed module for OpenMRS, which works in conjunction with the OpenMRS Event module to read change events from the OpenMRS database.
The standalone Atom Feed streaming app in the analytics engine listens to feeds generated by the Atom Feed module of OpenMRS, extracts the resource UUID from each feed entry, and fetches the corresponding FHIR resources, which are then exported to a GCP FHIR store or a shared health record. It can also be configured to emit REST objects and resources. Note that this mode can only capture changes made within the OpenMRS application. The OpenMRS FHIR2 module implements a FHIR interface for OpenMRS.
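As a rough illustration of the UUID-extraction step (the class name and regex below are hypothetical, not the engine's actual code), a feed entry's content carries a FHIR2 resource URL from which the UUID can be pulled before fetching the full resource:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: pull the resource UUID out of a FHIR2 URL found
// in an Atom Feed entry, as the streaming app does before fetching the
// full FHIR resource.
public class FeedUuidExtractor {

  // Matches the tail of URLs like .../ws/fhir2/R4/Patient/<uuid>
  private static final Pattern RESOURCE_URL =
      Pattern.compile("/ws/fhir2/R4/(\\w+)/([0-9a-fA-F-]{36})");

  public static String extractUuid(String entryContent) {
    Matcher m = RESOURCE_URL.matcher(entryContent);
    return m.find() ? m.group(2) : null;
  }

  public static void main(String[] args) {
    String content =
        "http://localhost:9016/openmrs/ws/fhir2/R4/Patient/"
            + "5c1519e4-0b86-47b2-a2f4-9a0d15220394";
    System.out.println(extractUuid(content));
  }
}
```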
The URLs for FHIR resources of this module have the form
http://localhost:9016/openmrs/ws/fhir2/R4/Patient. Therefore we need to update the Atom Feed module config to produce these URLs. To do this, from the OpenMRS Ref. App home page, choose the "Atomfeed" option (this should appear once the Atom Feed module is installed) and click "Load Configuration". From the top menu, choose the file provided in this repository at
utils/fhir2_atom_feed_config.json and click "Import".
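Once the feed produces URLs in this form, the corresponding FHIR resource can be fetched with an authenticated GET, which is essentially what the streaming app does per feed entry. A minimal sketch using the JDK's HTTP client (the base URL, port, and credentials are placeholders, not values mandated by the engine; the request is built but not sent here):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Base64;

public class FhirFetchSketch {

  // Builds a GET request for a single FHIR resource exposed by the
  // FHIR2 module. Base URL and credentials are placeholder assumptions.
  public static HttpRequest buildRequest(String baseUrl, String resourceType,
      String uuid, String user, String password) {
    String auth = Base64.getEncoder()
        .encodeToString((user + ":" + password).getBytes());
    return HttpRequest.newBuilder()
        .uri(URI.create(baseUrl + "/ws/fhir2/R4/" + resourceType + "/" + uuid))
        .header("Authorization", "Basic " + auth)
        .GET()
        .build();
  }

  public static void main(String[] args) {
    HttpRequest req = buildRequest("http://localhost:9016/openmrs",
        "Patient", "5c1519e4-0b86-47b2-a2f4-9a0d15220394", "admin", "Admin123");
    System.out.println(req.uri());
    // To actually send it:
    // HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString());
  }
}
```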
The Atom Feed client requires a database to store failed events and marker information, and we'll use MySQL for this purpose. If you don't have an available MySQL service, you can start one up using Docker:
docker run -e "MYSQL_ROOT_PASSWORD=root" -p 127.0.0.1:3306:3306 --name=atomfeed-db -d mysql/mysql-server:latest
Now you should have MySQL running on the default port 3306, and can run:
mysql --user=USER --password=PASSWORD < utils/dbdump/create_db.sql
This will create a database called atomfeed_client with the required tables (the USER should have permission to create databases). If you want to change the default database name atomfeed_client, you can edit utils/dbdump/create_db.sql, but then you need to change the database name in the Atom Feed client configuration as well.
After you have done the above, fire up an OpenMRS instance (either locally or using its Docker image) along with a HAPI FHIR JPA server, as instructed in the README file. Then run the analytics engine's Atom Feed streaming app with the respective command-line arguments.
Streaming mode (Debezium).
In this streaming mode we mainly use the embedded Debezium MySQL connector Camel component. The Debezium MySQL component is a wrapper around Debezium using Debezium Embedded, which enables Change Data Capture from a MySQL database without the need for Kafka or Kafka Connect.
Debezium is a change data capture (CDC) framework used to track database changes in tables and schemas, such as creating, updating, or deleting a patient in the DB. You simply start Debezium and point it at the database; it reads the MySQL binlog as change events happen, so that other applications can consume these events and respond appropriately. This has an advantage over Atom Feed streaming in that it can capture changes made directly to the DB, outside of the OpenMRS application.
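To make the CDC idea concrete, here is a toy dispatch (the class, table names, and mapping are illustrative, not the engine's code): each binlog change event names a source table and an operation, and a consumer decides which FHIR resource type is affected:

```java
import java.util.Map;

// Toy illustration of consuming change events: a binlog event carries
// the source table and operation; the consumer maps it to the FHIR
// resource type that needs to be (re)fetched. Mapping is illustrative.
public class CdcDispatchSketch {

  private static final Map<String, String> TABLE_TO_RESOURCE = Map.of(
      "person", "Patient",
      "encounter", "Encounter",
      "obs", "Observation");

  public static String affectedResource(String table, String operation) {
    String resource = TABLE_TO_RESOURCE.get(table);
    if (resource == null) {
      return "ignored";
    }
    // Debezium's event envelope uses "c" = create, "u" = update,
    // "d" = delete for the operation field.
    return operation + ":" + resource;
  }

  public static void main(String[] args) {
    System.out.println(affectedResource("person", "u"));
  }
}
```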
Apache Camel is an open-source framework used to implement enterprise integration patterns for integrating various systems. It allows end users to integrate various systems using the same API, providing support for multiple protocols and data types, while being extensible and allowing the introduction of custom protocols.
How it all works.
The embedded Debezium MySQL connector listens for changes in the OpenMRS DB (given the required configuration) by reading the MySQL server's binlog. A Camel route is set up in the configure() method overridden from Camel's RouteBuilder class, and events on that route are handled by a processor that converts them into their corresponding FHIR resources. This is done in the DebeziumListener class.
We also have the FhirConverter, which converts the Debezium events into FHIR resources and in this case acts as our processor. Finally, the Runner class contains the main method for running the pipeline with the passed-in command-line arguments.
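The route-plus-processor shape described above can be sketched in plain Java. Camel's actual Processor interface takes an Exchange; here a simplified stand-in interface is defined locally so the example is self-contained, and the conversion logic is purely illustrative:

```java
import java.util.Map;

public class PipelineSketch {

  // Simplified stand-in for Camel's Processor: the real FhirConverter
  // implements org.apache.camel.Processor and reads an Exchange.
  interface Processor {
    String process(Map<String, String> event);
  }

  // Illustrative converter: turns a change event into a FHIR resource
  // reference that downstream steps would fetch and export.
  static class FhirConverter implements Processor {
    @Override
    public String process(Map<String, String> event) {
      return event.get("resourceType") + "/" + event.get("uuid");
    }
  }

  public static void main(String[] args) {
    // In the real pipeline, DebeziumListener's route feeds events to the
    // processor; here we hand it one event directly.
    Processor converter = new FhirConverter();
    String ref = converter.process(
        Map.of("resourceType", "Patient", "uuid", "1234-abcd"));
    System.out.println(ref);
  }
}
```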