Hey people, I’m at the Cochrane Colloquium in Quebec and excited in a way that I have never been at one of these events before. There is a bubbling undercurrent of innovative ideas for reducing the difficulty and time it takes to write systematic reviews, and it feels like real, tangible change is possible. Using tech, we could ramp up the production of systematic reviews 10-, 100- or even 1000-fold, according to Julian Elliott.
From what I can tell, the bubbles are about to burst forth and revolutionize the production of systematic reviews in the following ways:
We already have the Cochrane Register of Studies. All the CRGs (Cochrane Review Groups) have now loaded their registers of trials into the CRS, and work can really begin in earnest on ironing out the issues – namely, applying a unique CRS identifier code to each record to overcome the difficulties with duplicates that some groups (including my group, Airways) are having. This technical problem with duplicates is preventing us from capitalising on what the CRS can offer – which will for sure be hugely powerful. The developers (Metaxis) are working on this – or will be, as I understand it.
At the moment Liz is using the CRS to store and search our references on asthma, COPD, bronchiectasis, etc. Emma J is uploading every translation she gets, along with other documents associated with the references. But what would be really awesome is if we could store and share documents with others and work towards full studification of the trials.
At the #CochraneTech meeting our group (number 2D) discussed the idea of a CRSD (Cochrane Register of Study Data). This idea uses the CRS as a platform for storing all kinds of information – extracted data, emails from people who run the trials, risk of bias assessments, you name it – so that it is available for everyone to use for a variety of things, including: updating the review; writing overviews of Cochrane Reviews; writing new reviews on a different question; re-analysing the data for guideline writers; the list goes on.
Covidence is a platform that helps review authors from the minute they get the references for their review (from the literature search) to the time they export the data into RevMan to write up the review. At the moment Covidence stores the data about the trials as part of the review, but the idea is that the data will be available to be used (in the ways I described above) in the future. I’ll be trying Covidence out very soon, and hopefully we can get our Airways authors using it soon after.
You can already move the extracted data straight across into RevMan, and Julian Elliott strongly believes that this software will continue to link to the Cochrane software. So I think it’s the best bet for being supported long term: we can trust that the software will be developed and improved for Cochrane’s needs, and that if something else is developed in the future, the data can be moved into the new software. In other words, when the CRSD (or whatever) comes about, you can be fairly certain that Covidence will link to it so we can share the lovely data.
Review Exchange, or REx, is another project: a website that allows people to post systematic review tasks they need help with, and others to volunteer to do them. The site designs look neat, like a regular social media site where you can upload and link your profile. The person posting the task can say what rewards the volunteer will get for doing it – for instance authorship, acknowledgement in the review, mentoring, or credits.
I get so many requests from people who want to get involved in systematic reviews, but at present the only way is for them to take on a whole review, or sometimes I can hook them up with a team (sometimes this works; more often it fails, as everyone’s expectations are out of kilter). So I end up writing lame emails saying I will try to put them in touch with a team, while in reality not having the time or possibility to do this in a meaningful or helpful way. If we had this ace tool, people could dive right in and try something – say, extracting data for a few reviews or updating the risk of bias assessments. Then, if they liked it and were good at it, they could either carry on working on the review, or get credits/endorsements and start working on another project until they had built up the skill set and experience to be more fully involved in the review process.
Why is this so desperately needed and potentially brilliant?
Well, I personally believe that it is a crime that we don’t record our data in a way that can be used to make people’s lives better. “Sounds a bit dramatic – what are you going on about, Emma???”
The AllTrials campaign (to which Cochrane is affiliated) calls for the results of all clinical trials to be reported. It also calls for all clinical trials to be registered, so that we can keep an eye on them and check that the data have been reported. I think that campaign is brilliant – you should sign the petition, and consider donating money if you have not already.
But one should always be willing to check that one’s own house is in order.
At the moment, everyone who does a systematic review is storing their information in a slightly different way – Word document, Google Doc, spreadsheet, in the cloud, even in pen and ink – and in a different format. It is labour intensive: I reckon it takes about 2 hours per trial, in duplicate, and then you have to manually go through and check whether you agreed with your co-author. If you wish you had extracted more information at any stage in the review process, you have to go back to the original papers (I am sure I am not the only one to have done this directly into the review, thus making my data extraction sheets out-of-date, useless and not duplicated). This isn’t too bad if that person stays in control of the review, as they will probably keep the data and remember where they kept it. But if they move jobs or hand the review over to someone else (or their hard drive is wiped, etc.), then the data can get lost, or be too difficult for the new team to interpret, so it might have to be extracted all over again.
From an editorial point of view, if the extracted data are made available and we can see the transformations of the data reported in trials, we can verify the accuracy of the review. If there are problems, we could make changes to the extracted data and re-analyse them at the touch of a button, maintaining the accuracy of the data stored for use in future versions of the review and in other applications, as well as in the systematic review in question. It’s a win-win.
Let us share the data we take from clinical trials to use as the basis for our reviews. This will allow other people to use it to make better healthcare decisions. We are asking people who run clinical trials to share their data, so we must make our own data (taken from those same trials) freely available, transparent and reusable (to prevent duplication of effort). The strategic report and vision to 2020 speak to this.
What do you think?
p.s. I wrote this post to update anyone interested on the current status of #CochraneTech as I see it, from the conversations I have had in official sessions and down the pub here at #CochraneQuebec – so if it’s not right, let me and everyone else know in the comments (tell me where I’m wrong in the pub too, obviously, but do join the discussion in the comments lol). There are other brilliant programs, for example SRDR, but Cochrane has to choose one and go for it. Every month we publish around 40 new reviews – that’s 40 lots of data stuck forever on someone’s computer. We have to choose one sensible idea quickly and nurture it – we can always change and improve on it. We can benefit from learning from other software, as others have benefited from Cochrane’s ideas, methods and software.