New Natural Language Processing tools available on VINCI
Department of Veterans Affairs
VA Informatics and Computing Infrastructure
Ryan Cornia
Qing Zeng
VINCI-043012
Moderator: Presenting for us today we have Ryan Cornia, and possibly Qing Treitler Zeng, who may be able to join him later in the presentation. In this seminar they will be describing a tool that facilitates the use of VA clinical text notes.
The tool, called V3NLP, is a full-function natural language processing platform. It can be used to extract a variety of target clinical concepts, for example diagnoses or ejection fraction values, from clinical notes.
As I said, our primary presenter today is Ryan Cornia. He is the project lead for applications, systems, and programming for VINCI. He is located at the Salt Lake City VA and is also affiliated with the University of Utah.
And if Qing is available to join us: she is the PI for information extraction for CHIR and NLP, and also product owner for VINCI, also located in Salt Lake City. So without further ado, I would like to turn it over to Ryan at this time. Are you available to share your screen now, Ryan?
Ryan Cornia: Yes.
Moderator: Okay. I’m going to turn it over to you and you should see a popup. Go ahead and accept that, great, coming through.
Ryan Cornia: Great. And Qing is here so I’m going to turn it over to Qing to do the introduction and get us started.
Qing Zeng: Hi. I'm Qing Zeng. Today we are going to talk to you about this natural language processing system, which we also view as a prototype NLP ecosystem. I'm going to get started here.
First we will start with a question. As some of you know, there are over one billion clinical notes in the VINCI repository, 1.8 billion in fact. How would you like to use them, and which types of notes would you be interested in? If you can take a minute, give us some of your thoughts; that will help guide our demo for you.
Moderator: Thank you, Qing. For everyone wondering how to submit your answers: since this is an open-ended question, we would like you to submit your answer using the question function. Go ahead and open the question section located on the right-hand side of your screen on the Go to Webinar dashboard, and write in any ways you would like to see the VINCI repository used.
So there are one billion clinical notes in the VINCI repository. How would you like to use them? Which type of notes?
We do have quite a few responses coming in: pathology notes, health factors, delivery of evidence-based therapies in mental health, radiology reports, et cetera. As there are so many responses we will not read through all of them at this time, but I do encourage you to write in your response, as Qing and Ryan will be reviewing them after the session.
So although we can't read all the results now, they are taken into consideration and we do appreciate your input. We'll give everybody about another minute or so. They are coming through quite rapidly.
Okay. Thank you to all our respondents and you can go ahead and advance to the next slide now.
Qing Zeng: And we'll ask another question. If you're interested in text notes, some of you may have already used certain tools to process clinical text. Can you tell us a little bit about what you have used so far? For example, some of the things you might be familiar with would be MetaMap, MedLEE, cTAKES, ARC, HITEx.
If you haven't heard of any of those, some of you who are more technically inclined may have heard of UIMA and GATE. And I know a lot of people have used various regular expression search capabilities to process text. So whatever you have used, please let us know. That will help guide our development in the future.
As I will be showing you later, on one hand you can view this as a classic natural language processing system; on the other hand, it is also an NLP ecosystem, designed to host and house heterogeneous tools.
Moderator: Thank you, Qing. Just so everybody is aware, I do see answers coming in. Thank you very much. We do have a few for them to review: Monarch, ATLAS.ti for qualitative data. Some people have not used any; others list Perl, GATE, SAS, ARC, Accelerator. So thank you for writing in all these responses. As I said, they will be reviewed.
Just a quick note, when you are going to type in any questions or comments during the rest of the presentation, please do spell out any acronyms that a non-expert might not be familiar with. I will be moderating the questions and I am not an expert in natural language processing. So in order for your question to come through clearly please write it out clearly for me. I appreciate that in advance.
And I think we’re ready to move on to the next slide. Thank you.
Qing Zeng: Great. Thank you for answering the questions. Now we will tell you about this tool we have built. V3NLP is a system intended to facilitate end users' use of text processing functions.
And that's important because, although I heard some of you say SAS, Perl, and so on, the majority of you haven't used the more sophisticated natural language processing software that is built on top of, say, UIMA or GATE, or other programs like MetaMap that were developed from scratch, originally in Prolog.
So there is a challenge for end users in using text processing functions; there is a learning curve. The goal of this system is to put these tools in your hands to use.
The second goal we had is for this tool to provide a framework that supports heterogeneous NLP modules. From the NLP modules I heard people have used, you can tell they are not all based on UIMA, not all based on GATE, and not even all written in Java.
Perl, for example, actually has very powerful text processing functions and is very user friendly, but it is not directly compatible with any of the frameworks we have mentioned, for example UIMA, GATE, or the way MetaMap works.
Also, one thing we had in mind in designing this framework is big text processing needs. This is important because traditionally NLP systems are not known to be fast; NLP is a relatively computationally slow process.
Although today we won't show you the fast processing aspects, we want to mention that beyond the user interface, in order to run through hundreds of thousands, not to say millions or even 1.8 billion notes, we have some capability built in to facilitate ultra-fast text processing in batch mode, which Ryan will show you.
So this V3NLP system has three components. One is the user interface, designed by Ryan, who will demo the system. There is the backend framework, which is a collaboration among several members of our team, including Guy Divita, Tom Ginter, Ryan Cornia, and Scott DuVall, as well as Doug Redd.
And we also have repositories of processing modules. These modules came from a number of well-known open source clinical text processing systems, including HITEx, cTAKES, and MetaMap, and we also have some modules we developed specifically for VA text. We will show some of that to you in the demo.
First I want to show you one piece of sample text. Some of you are probably very familiar with this kind of clinical note; some of you may not be. This one is actually fairly typical, and I say typical advisedly, because there are so many different types of clinical notes.
This is one example of a clinical note. As you can see, there is a lot of information in there: diagnoses, medications, lab orders, treatment options, follow-up plans. Some of it may complement the existing coded data.
Numerous studies, including our own, have shown that natural language processing systems can extract information that complements the ICD-9 codes and medication records, and that is more comprehensive and/or more accurate than ICD-9 codes, even taking diagnoses alone.
Other studies have shown that medication orders and similar items can benefit from a natural language processing perspective. There is also information contained in this text that you wouldn't find in existing coded data, which I imagine some of you are very interested in.
For example, treatment options and alternative treatments are usually not there. Follow-up plans are often not coded, along with things like compliance. Recently, for example, we were looking at mental health notes and suicidal ideation. A lot of the symptoms indicating suicidal ideation are not coded in the medical record, but you can find clues in the actual notes themselves. That is why we are interested in extracting this information from the free text.
In order to get this data, what a typical processing system does today is take the unlabeled data through a reader and then through a series of processing steps. These are done sequentially, extracting the words, values, sections, and so on and so forth, all the way to extracting concepts and the context of those concepts. Further along, it may even use machine learning to build algorithms to classify.
In this case, the pipeline we're showing you is trying to extract symptoms from unlabeled medical notes. It would then write out the symptoms it found into either a medical record or a database.
But looking at this, it may come across as fairly complex, because after all most users are not familiar with each one of these modules. That's why we have the interface component.
What this component does is help you use existing processing pipelines so you don't have to be an NLP expert. As you will see, it is fairly simple to use existing processing pipelines. And if you have a little more interest and expertise, you can customize existing processing pipelines, so you don't have to do exactly what the existing pipeline does.
And that is important because just about every single natural language processing task that researchers bring to us has slightly different needs from any other task we have done. People always look for somewhat different concepts, want a different setup, or want to look in different sections, and so on and so forth.
That's why we want to provide the capability for people to customize the pipelines. We also provide the capability of creating new processing pipelines.
And this is important because we know some of the users, even new users who have only used it a few times, can create pipelines. As you can see, it doesn't require a bachelor's degree in computer science to create a pipeline.
It's done through drag and drop; you can chain together processing functions. We also allow you to review NLP results to get a quick look at the performance of a pipeline, which will help you further customize or change it.
The second component, as I mentioned, is the backend framework. Most of you may not be interested in this, but some of you will, because what you see on the left-hand side is a component called FLAP.
On the right-hand side there is a component called SLAP. FLAP is a UIMA-based framework, built to allow scale-out of processing functions. It uses UIMA services as well as clients that connect to them.
So for any UIMA module, and many of the systems today are based on UIMA, you can just put it on this framework. It will run, and the framework provides functionality such as passing parameters and hiding or choosing certain annotations to pass along to the next module.
But the reality today is that even though UIMA and GATE are probably the two dominant frameworks, there are many modules written on neither UIMA nor GATE. That's why we have the SLAP framework, to support those heterogeneous modules.
For example, you could have UIMA, you could have GATE, or you could have a module written in any language; as long as it speaks to our pipeline using this common model to provide annotations, these heterogeneous modules can be chained up.
A real example of that: some people may have code written in Perl or R, both of which are fairly popular in the scientific community. As long as you wrap it up as a web service that provides common-model-based annotations, it can run on the SLAP framework. And the beauty of this is that as an end user you don't have to know about any of it.
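[Editor's note: to make the wrapping idea concrete, here is a minimal sketch in Java of such a service: a tiny HTTP endpoint that accepts raw note text and returns annotations as JSON. The endpoint name, the JSON shape, and the toy word-tokenizer "module" are illustrative assumptions, not V3NLP's actual common model.]

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // A toy annotator service: POST raw note text to /annotate and get back
    // JSON annotations (type, begin, end) for each word token it finds.
    public class TokenAnnotatorService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/annotate", exchange -> {
                String doc = new String(exchange.getRequestBody().readAllBytes(),
                        StandardCharsets.UTF_8);
                StringBuilder json = new StringBuilder("[");
                Matcher m = Pattern.compile("\\w+").matcher(doc);
                while (m.find()) {
                    if (json.length() > 1) json.append(',');
                    json.append("{\"type\":\"Token\",\"begin\":").append(m.start())
                        .append(",\"end\":").append(m.end()).append('}');
                }
                byte[] body = json.append(']').toString().getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            });
            server.start();
            System.out.println("Annotator listening on http://localhost:8080/annotate");
        }
    }

[Any language that can stand up an HTTP endpoint and emit the same annotation shape, Perl and R included, could play the same role in the pipeline.]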
We also have repositories. Again, for some of you this may sound like too much detail, but for the NLP developers: there are two repositories.
One we currently call Derma. The modules there are not quite up to production quality yet, but it is good for collaboration; the code works but hasn't been thoroughly tested.
Then there is a repository named Bones. Both are Git repositories. Bones holds production quality code, production quality meaning code we have tested to run with reasonable performance and time on VINCI data.
And also you can expect the modules of Bones to have documentation as well. So those will be very helpful for the NLP developers.
I would like to acknowledge my team members: Diana Carter, Ryan Cornia, Guy Divita, Scott DuVall, Tom Ginter, Shirl Nielson, Doug Redd, Ben Sunder. We listed these names alphabetically because everybody has put in a lot of work to make this happen.
And before we ask the last two questions, I think we will show you the demo. Then we'll ask you some questions for your feedback regarding the VINCI tools.
Moderator: Before you go into the demonstration would you like to take the questions that have come in thus far?
Qing Zeng: Oh sure.
Moderator: Okay. The first one is how do I bring notes, the problem-related notes?
Qing Zeng: Bring notes? I assume this is saying how do I bring notes into VINCI.
Moderator: I assume so as well, but again a reminder to our audience please do write out your questions in a clear and explanatory way. Thank you.
Qing Zeng: There is a process for uploading your local dataset onto VINCI, and there is precedent for that. Also, what Ryan is about to show you includes several ways of grabbing the notes on VINCI. Presumably most of the notes you will be interested in are already in the VINCI repository.
You will probably only need a subset of that. So we will show you that there are several ways: you can point to a directory if you uploaded data to VINCI, or you can directly access a database where VINCI has put a subset of your data.
And on a separate note, if you are talking about non-VA data: we are in the process of releasing this software as open source, so you could download it. Either you bring the data to the software, or you bring the software to the data.
Moderator: Thank you very much for that. The next question we have: is V3NLP a programming language?
Qing Zeng: No, it's not a programming language. It's really software. As I said, it has an interface where you can use a variety of modules without writing any code. And if you are an NLP developer, you can wrap up your own modules, put them onto this framework, and run them, taking advantage of the interface, the connection to the VINCI data, and the ability to review results easily.
Moderator: Thank you for that response. The next question and then is there a way to access V3NLP without having a study site on VINCI? Is there some way to explore and learn it using test data?
Qing Zeng: That I have to go back and look into. This is slightly tricky, because VINCI was established out of the VA's concern about data ever being lost or misused, so control of the data is very tight. You can certainly use the software.
The question is the test data. We do have a limited amount of test data; I wouldn't say it's as good as the real data.
Moderator: Thank you for that response. We do have a few more questions that have come in and if Ryan is going to go over them during the demo then feel free to skip them for the time being. The next one, I am curious how V3NLP compares with IBM Text Analytics, which does have some NLP-based technology for text analysis, haven’t used it yet, but I’m very interested.
Qing Zeng: As some of you probably know, UIMA is an open source product released by IBM, and it is a powerful platform. Part of our V3NLP framework is based on UIMA.
IBM has certainly released a few text modules, but they probably have a lot more that is proprietary that we have not seen yet. Insofar as they are willing to release them, it will be very easy for us to add them onto the V3NLP platform for everybody to use.
Some of the things they have released are specific to processing pathology reports, for extracting tumor and cancer-related information from pathology notes. Those are certainly very good, but it's a very specialized pipeline.
Moderator: Qing, the next question, what is G-I-T, Git?
Qing Zeng: Git is—I don’t know what Git stands for. Ryan, do you know?
Moderator: Okay.
Ryan Cornia: I don’t know what it stands for, but it’s a source code version management system.
Moderator: Okay, thank you.
Qing Zeng: You can deposit source code there, and multiple people can easily use a hub like GitHub to collaborate, branch the software, and facilitate collaborative development.
Moderator: Great, thanks. I’m going to take these last three questions and then hold the rest until the end. How often do you update patient notes in VINCI? When was the last update?
Qing Zeng: I think there are several answers to that. The VINCI text repository we are using is not frequently updated. So far VINCI has been uploading notes in bulk to catch up with the historical notes; for example, we went from 600 million to 1.8 billion not incrementally, but in one shot.
But what I heard is that in the future, and partially it's already happening, the VINCI notes will be based on CDW, and CDW is updated very frequently. So far VINCI has mostly been supporting research, which doesn't require a daily update.
Moderator: Thank you for that response. The next question: are there any practice text notes, de-identified, that do not have to be real data, that we can use to gain experience with the tools? I believe you may have addressed this already.
Qing Zeng: Yes. We recognize the need, and hopefully we can provide some of that soon.
Moderator: Thank you. The next—we’re going to do one more and then we’re going to move on. Could VINCI provide a one-on-one tutoring session for me to practice how to use V3NLP to extract a specific medical procedure in health factors?
Qing Zeng: We hope we can do that. If you're the only one asking for it, we definitely will accommodate you. If dozens of people want that, we may not be able to, but we're in the process of creating a tutorial. So we certainly will be happy to help out the first few who knock on our door.
Moderator: Okay. We do have someone from VINCI that wrote in and said direct people to vaww.vinci.med.. And in the data tab you can look for the TIU for the latest update on what we have. So that’s where it is folks. It’s on the Intranet vaww.vinci.med..
Thank you for that person for writing in. And I think we’re ready for the demo.
Ryan Cornia: Okay. What I'm showing here is the main, initial screen of V3NLP. The app library is where the existing pipelines we talked about live. These are basically templates that you can use to get started.
They are organized in different groupings, and obviously as we go down the road and help other people, we'll add what pipelines we can, to try to get this template library as large as possible.
Or, if you want to create a new pipeline, you can hit create new and start with a fresh one. If you go to edit or run, it takes you to the pipeline detail screen. On the pipeline detail screen you'll notice over here on the left the services that are available, and by service I mean a natural language processing task.
So, for instance, tokenization will take your document and separate out each of the tokens. The sentence splitter will mark your sentences; the part-of-speech tagger marks your parts of speech; and those types of things.
The services are on the left, and here in the main window is the actual pipeline. It's a tabbed interface, so you can have multiple pipelines open at once.
For our first scenario here, we're going to run through finding ejection fraction in a set of documents. The first task is always fetch, and this is where you tell it where your data is.
As Qing mentioned you can either point it to a directory on the file system. You can paste in the actual text of the document if you only have one document. You can point it to a file.
We're also working on a data service that can fetch from the database and pull records that way. For this simple example, we're going to point it to a directory. This directory has about ten or twelve de-identified sample medical records.
Then we have a regular expression. Regular expressions are a really powerful tool that we can use to satisfy a lot of needs.
In this example, we're just going to look for ejection fraction in any of those medical documents, and then we're going to review the results. So it's really just a matter of stringing together your services. This is about as simple a pipeline as we can get.
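[Editor's note: as a rough illustration of what this kind of regex service does under the hood, here is a minimal sketch in Java. The class name, sample note, and pattern are illustrative assumptions, not V3NLP's actual code.]

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EfFinder {
        public static void main(String[] args) {
            String note = "Echo performed. Ejection fraction 55%. Follow up in 6 months.";
            // Case-insensitive match on the spelled-out phrase, as in this first pipeline.
            Pattern ef = Pattern.compile("(?i)ejection fraction");
            Matcher m = ef.matcher(note);
            while (m.find()) {
                // The mention plus its character offsets: the raw material for an annotation.
                System.out.printf("\"%s\" at %d-%d%n", m.group(), m.start(), m.end());
            }
        }
    }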
So when we run a pipeline this is where it actually submits it to the backend framework that does the processing and returns the results. So we’ll look at the results screen here. Let me maximize it.
On the results screen, the document list here shows all of the documents that met our criteria, that is, that had ejection fraction in them, and even the number of times it was mentioned in each document.
If I click on a document, then over here I can choose which annotations to show. You'll notice that the phrase is highlighted, which means the framework found it and annotated that mention of ejection fraction.
Over here on the right, when I click on the annotation, it shows you the metadata, the underlying information about that annotation. For a regular expression, it just says that it's a concept, and this was the regular expression.
We also have a corpus summary that shows the summary over the whole corpus, all of the documents. In this case, my documents produced nineteen mentions of ejection fraction.
I should note too that you can add annotations. Say you come in here and realize that, in this example, you wanted EF annotated as a mention. You can highlight it and right-click on it to add it as an annotation.
Then over here in the corpus summary you'll see it show up as a false negative, because the computer didn't mark it but we thought it should be marked. You can also delete annotations, the same type of thing: right-click on one and choose delete.
So that was a very simple example. I'm going to run through a couple more. You'll notice that with the first example we were looking for ejection fraction spelled out in full. The power of regular expressions is that you can handle a lot of variants.
In the last example we had nineteen mentions over our corpus, but in this example we're going to add a few more regular expressions. We want to look for "EF" or "E.F." as well.
And again, we're going to group these all under ejection fraction. You'll also notice we can add regular expressions here. When we add those, we have an expression library, which we hope to extend as people bring regular expressions to us; it has a lot of the common regular expressions people may need or use frequently.
So this is a good place to start. If you're looking for respiratory problems, for example, there is a regular expression to find that.
Back to our example: we'll run this. Now we're looking for three variations on ejection fraction, so we've broadened the search a bit; instead of looking for one form we're looking for the different variants.
You'll notice in the corpus summary we went from nineteen to twenty-five. By broadening the search we found more results. And if I show the actual documents here, you'll see that it found "EF" this time, as well as ejection fraction spelled out.
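[Editor's note: one way to express "several variants grouped under one label" is a single alternation, sketched here in Java; the exact patterns the demo used are not shown, so these are assumptions.]

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EfVariants {
        public static void main(String[] args) {
            // One alternation covering the spelled-out form and the abbreviations,
            // all reported under the single label "ejection fraction".
            Pattern ef = Pattern.compile("(?i)\\bejection fraction\\b|\\bEF\\b|\\bE\\.F\\.");
            String note = "EF 60%. E.F. stable. Ejection fraction preserved.";
            Matcher m = ef.matcher(note);
            int count = 0;
            while (m.find()) {
                count++;
                System.out.println("ejection fraction mention: \"" + m.group() + "\"");
            }
            System.out.println("total mentions: " + count); // 3 with this broader pattern
        }
    }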
So to refine the search now let’s take the scenario where we’re looking for ejection fraction, but we only want a certain range. So we want to find those that have values in the seventies.
Again, regular expressions can be a little tricky to create, but as we build up the library and more users get into this, they're very easy to work with. I've changed the pipeline from the last example by adding this ending here, which says to find ejection fraction followed by some characters and then a value in the seventies.
So if we run this, we've now narrowed our search to only those results that have values in the seventies, and of course that narrows it down a lot, to four over our corpus.
If we look at this, you'll notice that it annotated EF and the value seventy. The other thing to notice is the capture groups in the metadata: we have a capture group that actually captured the value. In that instance it's seventy; in that instance it's seventy as well; but if we look at this example, the capture group is seventy-five.
These results can be exported to XML, so after you run through your processing you can easily export them and do further processing on the results. And in this case you actually have the value of the ejection fraction, which is really powerful.
So regular expressions can find a lot of these common values. Just as an example, if I take this as the starting point but say, well, I really want ejection fraction in the fifties, all I have to do is change these regular expressions, changing the seven to a five. Let me do that really quick.
And so now the pipeline is only going to look for values in the fifties instead of the seventies. So it provides a really interactive approach for researchers or users to be able to change the parameters quickly, and look at the results and see how many they get.
You'll notice in this case it found ejection fraction fifty-one, and my captured value is fifty-one. So I could easily play with different values to find different ranges and see how many mentions of certain ranges were in the corpus.
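[Editor's note: a minimal sketch in Java of the capture-group idea just demonstrated. The pattern, the 20-character gap allowance, and the sample note are illustrative assumptions, not the demo's actual expression.]

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EfValueCapture {
        public static void main(String[] args) {
            String note = "LVEF estimated at 55%. Prior study: ejection fraction 71 percent.";
            // The capture group (7\d) pulls out the numeric value. Only the second
            // mention (71) is in the seventies, so only it matches. As in the demo,
            // changing the leading digit (7 -> 5) retargets the range to the fifties.
            Pattern seventies = Pattern.compile("(?i)(?:ejection fraction|EF|LVEF)\\D{0,20}(7\\d)");
            Matcher m = seventies.matcher(note);
            while (m.find()) {
                System.out.println("match: \"" + m.group() + "\"  captured value: " + m.group(1));
            }
        }
    }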
So that's a simple example. Now we'll talk a little more about MetaMap, or concept mapping, actually. In this case we're using MetaMap as the concept mapping tool.
I'm going to run this against a single report. Concept mapping is very processing intensive, so it does take a little while; that's why in this example I'm just going to run it against one report.
What we're doing is taking the whole document and sending it off to map the different concepts in the document, in this example using MetaMap.
So, for instance, in this document it found 162 concepts. If you click on, for instance, "chest tightness", it's a concept; it gives you the semantic group, and this abbreviation is short for "disorder". Then it gives you all of the MetaMap metadata about that concept.
So this is how you could go about mapping different concepts. And then to show an alternate to that you can limit it by semantic groups. So let’s say that I’m only interested in the disorders in my particular corpus. I can limit concept mapping to just that so that instead of getting 162 results I get 51. And I only get the disorders.
Now the other thing I should point out here is we can change the interface here. We can change the highlighting color. We can change it to red so it will really stand out.
And again, now we've gotten concepts, but only those in the disorder category, the disorder semantic group. We could instead look for chemicals and drugs, concepts and ideas, devices, geographic areas. So we can really limit it to different groups.
The next thing I wanted to show is that a lot of times in these reports you're probably only interested in a certain section or sections. In this example we're going to run MetaMap again on our document, but we're going to add a sectionizer here and limit it to the "Other" section.
For the sectionizer, if we click under the advanced tab here, advanced users can actually go in and customize the section mapping rules if they're looking for a particular section. Again, this is a more advanced feature that I wouldn't expect a lot of people to use, but it's nice that it's available. The sections are mapped via regular expressions.
So you can actually go in and create your own custom section mapping configuration if you need to. In this case, we're just going to take the default, and we're going to look in the "Other" section.
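[Editor's note: a minimal sketch in Java of the kind of regex-based section mapping described here. The header patterns and category names are hypothetical; V3NLP's shipped configuration differs.]

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class SectionRules {
        public static void main(String[] args) {
            // Hypothetical header-to-category rules.
            Map<Pattern, String> rules = new LinkedHashMap<>();
            rules.put(Pattern.compile("(?i)chief complaint|cc"), "Principal Diagnosis");
            rules.put(Pattern.compile("(?i)medications?|meds"), "Medication");
            rules.put(Pattern.compile("(?i)past medical history|pmh"), "History");

            String note = "CHIEF COMPLAINT:\nChest pain.\nTREATMENT:\nAspirin started.\n";
            // Treat any line ending in ':' as a candidate section header.
            Matcher header = Pattern.compile("(?m)^([A-Za-z ]+):\\s*$").matcher(note);
            while (header.find()) {
                String name = header.group(1).trim();
                String category = "Other"; // headers matching no rule fall into "Other"
                for (Map.Entry<Pattern, String> rule : rules.entrySet()) {
                    if (rule.getKey().matcher(name).matches()) { category = rule.getValue(); break; }
                }
                System.out.println(name + " -> " + category);
            }
        }
    }

[With this toy configuration, CHIEF COMPLAINT maps to Principal Diagnosis, while TREATMENT falls through to Other, which matches how the demo's treatment header is categorized below.]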
Qing Zeng: "Other" is not a specific section header name; it is a category. You could have chosen the principal diagnosis, the medication, the history, or something like that. "Other" means none of those named sections.
Ryan Cornia: I should point out two other features here too. With exclude sections, you can invert the selection and say give me all sections except for this one. And then almost all of our NLP processing resources have this "keep annotation in results" option.
When you run the processing, a lot of the time, for instance with terms, tokens, and sentences, you're probably not really interested in those in the final result. You probably just want concepts or something along those lines.
So you can choose to keep the intermediate results or not. So in this case I’m going to say go ahead and keep those. And then again we’re going to do Metamap for all of the different semantic groups.
So this time we're running the same kind of pipeline we ran before, but we're also sectionizing it. You'll notice right here we still have the concepts, and we also have the section headers.
If I click on this, it says that "treatment" is a section header, and in this particular configuration it is categorized as "Other". We've limited the processing to just this section this time, so we're not looking at concepts anywhere else in the report, only this section. This is a really powerful way to focus in on certain sections of a report.
Now let's talk about the two different frameworks. The pipelines I've shown you so far have used what we call the SLAP framework, sort of the non-UIMA framework: GATE, web services, those types of things.
Now I want to show you a pipeline that uses UIMA, through the other framework, FLAP. It uses UIMA modules, which are very common. This is sort of the big UIMA pipeline; we're going to do a lot of processing on this document. We're going to tokenize it, and go through and look for lab values, sentences, sections, terms, parts of speech.
We're going to do phrase chunking. And you'll notice here at the bottom that although we're doing all those steps, we're only going to keep the final two results, which are concept extraction and negation.
When we run this, as an end user I don't need to know whether those modules are GATE modules or UIMA modules. As long as they're enabled in the V3NLP interface, V3NLP treats them all the same. So the results you get are the same even though it's UIMA output instead of GATE or web service output.
Now if I look at the report, we can see UIMA tags things slightly differently. In UIMA, concepts are marked as coded entries, but it's the same user interface and the same data; you can save it to XML the same way and work with it in the same way.
It really is that heterogeneous ecosystem, where we don't care what the module is. As long as it can speak our common model and integrate with our frameworks, we can use it.
One of the things I wanted to show in this example, though, is down here at the bottom: we have "no headache" and "no joint pain". If you look at the coded entry, I had negation in the pipeline, and you'll notice right here it tells me that the negation status is negated.
So in this instance it knows the headache is negated, and it adds that as metadata to the coded entry. If we look at some of the other ones, for "nutritional" there's no negation status, because it was not negated.
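[Editor's note: a toy sketch in Java of the negation idea, in the spirit of trigger-based algorithms such as NegEx: look for a negation trigger shortly before the concept, blocked by sentence boundaries. Real modules use much larger trigger lists and richer scope rules; everything here is an illustrative assumption.]

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class NegationCheck {
        // A handful of negation triggers; real trigger lists are far longer.
        private static final Pattern TRIGGER =
            Pattern.compile("\\b(no|denies|without|negative for)\\b", Pattern.CASE_INSENSITIVE);

        // True if a trigger appears within `window` chars before the concept,
        // with a crude scope rule: a sentence boundary blocks the negation.
        static boolean isNegated(String text, int conceptStart, int window) {
            int from = Math.max(0, conceptStart - window);
            String context = text.substring(from, conceptStart);
            int lastStop = Math.max(context.lastIndexOf('.'), context.lastIndexOf(';'));
            if (lastStop >= 0) context = context.substring(lastStop + 1);
            return TRIGGER.matcher(context).find();
        }

        public static void main(String[] args) {
            String note = "No headache and no joint pain. Nutritional status good.";
            System.out.println("headache negated? "
                + isNegated(note, note.indexOf("headache"), 30));      // true
            System.out.println("nutritional negated? "
                + isNegated(note, note.indexOf("Nutritional"), 30));   // false
        }
    }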
So you can really get a lot of metadata from that. And again we can turn on and off different annotations. So if I want to look at clinical statements I can turn those on. If I want to look at coded entries I can turn those on.
You can turn on multiple ones at once if you want. And just like with the GATE pipeline, you can add and delete annotations. If you go to add an annotation, in this example it will let me add any of the annotation types that I chose to return in the results.
I can also delete annotations. And again we have the same corpus summary. The user doesn't need to know the processor type; the results just come back the same.
And then I wanted to show just quickly if we create a new pipeline it asks for a name. We’ll just call this test, but really you just get a blank slate. And as I start adding things it does do dependency checking.
So, for instance, you can't do sentence splitting unless you have tokenization. When you try to add the sentence splitter, in this particular instance I have two different sentence splitters, a GATE one and a UIMA one, so I can choose here which one I want to use.
And when you choose between the two first of all it tells you the technology, but it also has a description of what the annotator does and tells you what it provides and it requires. Because of the dependency management it’s telling me that if I use the GATE sentence splitter it provides a sentence, but it requires a token and a space token.
So when I go to add that it’s not going to allow me to add that because I don’t have a tokenizer in my pipeline yet. So if we add a tokenization and then we add a sentence splitter it will allow me to add that now because my dependencies are met.
So as an end user you don't really have to know how all these things work together; you just need to understand the dependencies well enough to be able to add them. The little question mark here does the same thing, in that it gives you a description and tells you what the service provides and what it requires.
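[Editor's note: a minimal sketch in Java of the provides/requires dependency check just described. The class and annotation-type names are hypothetical, not V3NLP's internals.]

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class PipelineBuilder {
        // Each service declares the annotation types it provides and requires.
        record Service(String name, Set<String> provides, Set<String> requires) {}

        private final List<Service> pipeline = new ArrayList<>();

        // A service may only be added once everything it requires is provided
        // by services already in the pipeline.
        boolean tryAdd(Service s) {
            Set<String> available = new HashSet<>();
            for (Service existing : pipeline) available.addAll(existing.provides());
            if (!available.containsAll(s.requires())) {
                System.out.println("Cannot add " + s.name() + ": missing " + s.requires());
                return false;
            }
            pipeline.add(s);
            System.out.println("Added " + s.name());
            return true;
        }

        public static void main(String[] args) {
            PipelineBuilder b = new PipelineBuilder();
            Service tokenizer = new Service("Tokenizer", Set.of("Token", "SpaceToken"), Set.of());
            Service splitter = new Service("SentenceSplitter",
                    Set.of("Sentence"), Set.of("Token", "SpaceToken"));
            b.tryAdd(splitter);  // rejected: no tokenizer yet
            b.tryAdd(tokenizer); // ok
            b.tryAdd(splitter);  // ok now, dependencies are met
        }
    }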
So it really is very user friendly. The parts are documented so that you can learn and tackle this on your own.
Then in this example, let me point it to a single file again and we'll run it. This can be anywhere accessible on the file system.
So we'll go to my single report here, and again I'll tell it to keep both of those annotations in the result. This time when I run it, as we would expect, we get sentences, spaces, and tokens.
And so again we can look at those pieces individually and they’re all highlighted there. Or we can take out the tokens and say well we’re really just interested in sentences in this particular instance.
And now when my output comes back it's much simpler; I only have the annotations that I'm interested in.
Qing Zeng: Yeah. Another thing I think Ryan can show, since we have a little bit of time, is the batch processing capability. Once you're happy with a pipeline, you can send it to batch processing, which will be tremendously faster than running it through this interface.
Another thing I want to mention is the capability to mix and match things. For example, you can use a regular expression to find a sentence and then pass that sentence to, let's say, MetaMap for concept extraction. There are many options: you can first select sentences and pass them to MetaMap, or pass the whole document to MetaMap.
The modules on the left are the ones we already have, and we're in the process of adding more, because we have some modules already tested on UIMA that we haven't added yet.
We're also in the process of incorporating modules from similar systems, for example the YTEX system, developed by our collaborators at Yale and the West Haven VA, as well as work from our colleagues in San Diego; Wendy Chapman's group has tools we are trying to bring over.
So there are a number of these modules from various groups, especially our collaborators in the CHIR project. For example, MITRE has given us very good assertion modules that could replace or provide alternatives to this negation module, because they tell you not just that a concept is negated, but also the certainty, whether it is hypothetical, and so on and so forth.
As we said, this is really a framework, a platform that can host a variety of modules. It's not a classic NLP system that you just apply as-is to get one type of result; you can rearrange things in V3NLP and get the results you want.
Ryan Cornia: I should also point out that of course we allow saving of pipelines. You can save a pipeline to your local machine and use it later, or send it to somebody else and they can load it on theirs.
You can also load saved results. After you run a pipeline you can save the results, reload them later, review them, and add or remove annotations. So we have the standard loading and saving of both pipelines and data.
Qing Zeng: I think we have ten minutes left. We can go to our poll question and we can also take some questions.
Moderator: Okay. Thank you very much. I will put up the poll question in just one second. And we did have several questions come in during that, during that demonstration.
Okay. I'm going to go ahead and launch the poll question. Attendees, you do see that on your screen now. Just click the circle next to the answer that best describes how likely you are to use a tool like V3NLP for processing clinical notes: extremely likely, probably, maybe, not very likely.
And we have had half of our attendees already submit responses, now up to sixty percent. So we will give this just a few more seconds. And when these responses are done coming in I will close it. I will take back the screen so that you both can read through the responses real quick.
Okay. We’re up to seventy percent of people have voted. I will give it ten more seconds and then we’re going to close it out.
And they’ve stopped coming in. So I’m going to close it now. And I’m going to share it, take control of the screen. And can you see the results, Ryan and Qing?
Ryan Cornia: No.
Moderator: Okay, one second here. All right, now you should be able to see them.
Qing Zeng: Okay, great. Yeah, thank you.
Moderator: Okay. I’ll give you back the screen now. There you are. And you can go ahead and let me know.
Oh we have one more question. And this is another open-ended one so you will need to type your answer into the question box which is located at the right-hand side of your screen on the Go to Webinar dashboard. That question is what additional features would you like to see?
So we’ll give people some time to type those in. So again to answer this final question, what additional features would you like to see, please go to your Go to Webinar dashboard on the right-hand side of your screen. Open up the question section. Type in your response and the VINCI team will have a chance to review your answers after the session.
So we will give people another minute or two to type those in. And while we’re waiting if you don’t have any requests feel free not to respond. And we’ll move on to our questions at this time. One second.
The first question we have is, and you may have gone over this, how do Git repositories relate to “Derma” and “Bones?”
Ryan Cornia: Basically we have two different Git repositories: one where we store well-tested, production-ready, enterprise quality code, which we call Bones; and another where we store the more research quality code, still being collaborated on and not fully tested. The long-term vision is that as code gets used, hardened, and brought up to quality, we will move it from Derma to Bones.
Moderator: Thank you for that response. We do have ten questions remaining and more are coming in. My next one is, aren’t health factors stored as field-based rather than in text notes?
Qing Zeng: I am not very familiar with what health factors refer to.
Moderator: Okay. If they would like to write in further information they’re welcome to do so. The next question, often ejection fraction would be written in short form, for example EF or eject F. How would you get those? I believe you may have covered this.
Ryan Cornia: Yeah, we did. You just do multiple regular expressions with the different variants on the spelling.
Qing Zeng: That's right. It helps that we provide a library of regular expressions in the software, though it's by no means comprehensive. We do encourage users, once you create your own set of regular expressions, to save them back to the library, because we have heard that somebody actually created about 200 regular expressions for EF alone, and we know there are many more for other concepts. That way the work doesn't get duplicated over time.
Moderator: Thank you for that response. The next question we have, what happens if the expressions are misspelled? Will it be identified?
Ryan Cornia: No. It's regular expressions, so it matches using the standard regular expression rules. It doesn't do fuzzy matching; unless the pattern allows for a misspelling, it matches exactly.
Moderator: Thank you. The next comment is being written in by [E Win] of the VINCI staff. She says a health factor contains a health factor type, which is field-based, not text-based. So thank you for writing that in.
And the next question we have is there a pipeline for smoking history?
Qing Zeng: There is not a pipeline for smoking history, but we do have a module that performs smoking history extraction. We’re going to load that up shortly. We have had that for a long time.
Moderator: Thank you for that response. The next question, are the concepts in concept mapping pre-populated in the software, or does the program use sentence structure to determine whether a term is a concept?
Qing Zeng: The concepts are pre-populated in the sense that our concept mapping module defines concepts in terms of the UMLS, the Unified Medical Language System. It contains over a hundred, actually about 150, vocabularies, including SNOMED, ICD, and a whole bunch of others.
Moderator: Thank you for that response. The next question: when do you expect to resolve database connectivity issues in VINCI?
Qing Zeng: That question should be written to the VINCI staff.
Moderator: Thank you. The next question, what are the current options to use NLP on radiology reports? Specifically radiology reports are currently only available as raw data. Would V3NLP still work? And are there any plans to add radiology reports to the repository?
Qing Zeng: The concept extraction pipeline will work on x-ray reports, but more specialized pipelines and software have been developed for processing radiology reports. In particular, I want to point out there is an effort, part of CHIR, that developed a specific pipeline to extract mentions of medical devices from x-ray reports; it was developed by VA Palo Alto. And I believe there is a plan to load that onto VINCI and provide it to VINCI users.
Moderator: Thank you for that response. I do see that we are close to the top of the hour. If you are available I would like to continue asking a few more questions in hopes that we capture them in the recording. How does that work out for your schedule, Ryan and Qing?
Qing Zeng: Well we can answer one more question because unfortunately there is another group waiting to use the conference room.
Moderator: Okay. For all of our attendees, we do have several pending questions. I am going to send them to our presenters offline, and if they have time, hopefully I can get some written responses. I will then distribute those to you in an e-mail, and they will also be posted with the archive in our cyberseminar catalogue, located on the HSR&D homepage in the left nav bar.
Also, before any of our attendees leave, please hold on for a second after you exit the session before closing your browser, as you will be prompted to complete a survey, which provides us vital feedback for taking your opinions into account when presenting these sessions.
So the next question is so what is the mechanism for adding our own custom written NLP components to V3NLP?
Ryan Cornia: That's a great question. As long as they speak the common model, it's just a matter of changing the configuration, so we can add them very easily. Are you there, Molly?
Moderator: Yes. I apologize, I was getting my wrap-up notes. I just wanted to thank our attendees for joining us, and I very much want to thank you and Qing for presenting your expertise and your demo today. Would you like to make any concluding comments?
Qing Zeng: No thank you. And feel free to e-mail us for any further questions.
Moderator: Thank you very much. And to our attendees please after you leave the session stay on your web browser for a minute and you will be prompted to do a feedback survey, which we do take your opinions into account when updating these sessions.
So with that this does formally conclude today’s HSR&D cyberseminar. And I would like to thank each of you for joining us. Have a nice day.
[End of Recording]