IGB / IGBF-2481

Edit BioViz Connect Paper based on GigaScience info

    Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Done
    • Affects Version/s: None
    • Fix Version/s: None
    • Labels: None
    • Story Points: 3
    • Sprint: Summer 4: 14 Jul - 28 Jul, Summer 5: 3 Aug - 14 Aug, Summer 6: 17 Aug - 28 Aug

      Description

      Situation: The BioViz Connect paper was not accepted for review in GigaScience with the following message:

      "After consultation with the Editors and Board we have now assessed your manuscript and regret to inform you that it cannot be accepted for publication in GigaScience. We are now receiving more papers covering workflows and pipelines than we can handle, and our thresholds have had to keep pace and add utility and novelty beyond just being a users guide, as well as demonstrate clear improvements against the state of the art. Unfortunately this doesn't quite meet those thresholds but I hope you find a good home for the work." - Scott Edmunds

      I spoke with Laurie Goodman from GigaScience during the BCC2020 conference. She said that the email from Scott may have been more of a generic response, as they no longer provide precise feedback when they choose not to accept a paper for review. However, I think the paper does need to be edited to better emphasize the novelty (first use of Terrain) and technology.

      Laurie also pointed out that they are wary of accepting papers focused on websites, as those sites may start out free but later move behind a paywall or require creating an account. We need to emphasize that login is required only because data access goes through CyVerse, and that we will not create a paywall (the project is funded by NIH, which may be important to mention in the cover letter).

      This Galaxy paper may be a good model for ours. However, Laurie mentioned that Galaxy and GigaScience have a collaboration, which explains why so many Galaxy papers appear in GigaScience.

      Task: Edit the paper to emphasize the novelty (use of Terrain), functionality (useful for all CyVerse and IGB users), technology (add a technology stack figure), and security (emphasize how login works through CyVerse; it may be worth including a login technology figure).
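      To make the Terrain point concrete in the revision, it could help to show the authentication flow the paper should emphasize. Below is a minimal sketch, assuming Terrain's publicly documented token endpoint and an illustrative directory-listing call; it is not code from BioViz Connect itself, and the paths, parameters, and credentials shown are placeholders.

      ```python
      # Minimal sketch: authenticate to CyVerse's Terrain API and make one
      # authenticated call. Endpoint paths and the response shape are
      # assumptions based on Terrain's public docs, not BioViz Connect code.
      import requests

      TERRAIN = "https://de.cyverse.org/terrain"

      # Exchange CyVerse credentials for a short-lived access token
      # (HTTP Basic auth against the token endpoint).
      resp = requests.get(f"{TERRAIN}/token", auth=("my_cyverse_user", "my_password"))
      resp.raise_for_status()
      token = resp.json()["access_token"]

      # Later calls carry the token as a Bearer credential; here, listing a
      # CyVerse Data Store directory (the path and parameters are illustrative).
      listing = requests.get(
          f"{TERRAIN}/secured/filesystem/paged-directory",
          headers={"Authorization": f"Bearer {token}"},
          params={
              "path": "/iplant/home/my_cyverse_user",
              "limit": 10,
              "offset": 0,
              "sort-col": "NAME",
              "sort-dir": "ASC",
          },
      )
      listing.raise_for_status()
      print(listing.json())
      ```

      This token exchange is also the step a login figure should highlight, since it shows that authentication is handled entirely by CyVerse rather than by BioViz Connect.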

        Attachments

          Activity

          Nowlan Freese added a comment -

          Glad I had a chance to talk to you about this manuscript. Again, I wish we could always give more detailed feedback on our assessment of work we return, but a speedy response is important to us and to authors. I do understand that more information can be quite helpful (plus it's disheartening to get a "Sorry, we can't consider your manuscript further" letter with only a comment that sounds pat and standard). It can make authors worry that the editors didn't do much of an assessment beyond reading the abstract, or, as you noted, that they missed the point. I hope the following shows the detailed work we typically do to assess submitted work, and more importantly helps you understand the criteria we used to make the decision, which should also help with your submission elsewhere. One of your biggest concerns was that the editors perhaps thought the example you included was the main part of the paper. That was definitely not the case; their discussion did not treat the example as the system itself. So you can rest easy about that.

          For the editorial assessment and decision, here is what was done and how the ultimate opinion was formed. As the letter noted, it finally came down to this: we receive a lot of these papers, and for diversity's sake in a journal serving a broad audience, we can't publish them all. So after full consideration, the question is where the work falls in content and usability relative to the others we have recently considered. (Work we assessed last year may not be as advanced as work submitted today, and things we published that were submitted less recently are judged against where the field was at that time. It's always a moving target.)

          Okay, the assessment:

          We had one team member with expertise in data visualization and related areas assess it. We always check whether things work, of course, and also what the user would get from it relative to what else is out there. They felt it was close to the border of our acceptance level, but that the added value over what already exists wasn't enough to put it over the line. They noted that it definitely could allow visualization of large data sets, but that overall (and not to say this wasn't a lot of work on your end), with some minor additions, it was more of a wrapper tying existing tools together (i.e., the CyVerse API and IGB). There was more to it than that, but we come away with a big-picture assessment after going through things.

          They also noted that they couldn't get it to work. That is not to say it DOESN'T work, but we couldn't get it to run, and that triggers concern about stability. Or, if we did something incorrect, a more naive user would likely have a tough time too, and that limits how widely the field would adopt it.

          This is what happened when they tested it (the person assessing it has a CyVerse account):

          Following the instructions to download IGB and link BioViz to CyVerse, it did indeed use BioViz in connection with the browser to view CyVerse, so that was good. But on clicking the "View in IGB" button, it said "checking if IGB is running" (which it was) and then nothing else happened. So before resubmitting, you need to check whether something is missing from your instructions, or whether something between the two components is unstable and works only intermittently. There may be something missing in the text, so follow it exactly to check, because that is what a more naive reader would do. Whatever the case, this was our situation.
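          For debugging this failure, note that the "checking if IGB is running" step amounts to the web page probing a local HTTP endpoint that the IGB desktop application listens on. Below is a minimal sketch of such a probe, assuming IGB's historical browser-link port (7085) and a hypothetical status path; neither is confirmed BioViz Connect internals.

          ```python
          # Rough sketch of a "checking if IGB is running" probe: ping a local
          # HTTP endpoint the IGB desktop app listens on. The port (7085, IGB's
          # historical browser-link port) and the status path are assumptions.
          import requests

          def igb_is_running(port: int = 7085, timeout: float = 2.0) -> bool:
              try:
                  # Any HTTP response at all means a listener is present;
                  # a connection error or timeout means IGB is unreachable.
                  requests.get(f"http://127.0.0.1:{port}/igbStatusCheck", timeout=timeout)
                  return True
              except requests.exceptions.RequestException:
                  return False

          if __name__ == "__main__":
              print("IGB reachable:", igb_is_running())
          ```

          If this probe succeeds but the subsequent request that actually pushes data into IGB fails silently, the instability the tester hit would be in that second step, which is worth exercising explicitly before resubmission.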

          Then on to how likely people would be to use it: there was general agreement that the current trend is toward online, web-based applications, so needing to download and install local software to view a genome online seemed less likely to be adopted quickly. (And here is another possible reason we couldn't get it to work: something happened during the download, or something on our system interfered. We're guessing. Maybe test your work on multiple different computers?)
          Regardless, more and more people are looking for web-based applications, so that again reduced our interest.

          Our assessor also ran the panda genome through the IGB viewer (not through BioViz Connect) to see how that worked on its own, which has an impact on how useful a connection of this type is. He wasn't familiar with the IGB viewer and found it clunky and difficult to follow, but that's just a user-interface note, not an assessment of your work; we put it in the 'learning curve for a new user' camp, and it was not part of our decision. Still, I figured the information might be of use to you, so I add it here.
          However, when he ran the IGB viewer with the panda genome, it struggled with the larger chromosomes and was, for some reason, restricted to using 3 GB of memory on the machine (we used a machine with 32 GB available). We couldn't figure out why, but it means that either that is simply how it works, or a user would need additional knowledge to get past it.
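          A plausible explanation for the 3 GB cap: IGB is a Java application, so its usable memory is bounded by the JVM maximum-heap (-Xmx) setting it was launched with, not by physical RAM. A sketch of working around such a cap by launching the IGB jar directly with a larger heap follows; the jar name and the 8 GB figure are illustrative assumptions.

          ```python
          # Sketch: IGB is a Java app, so its memory ceiling is the JVM -Xmx
          # setting, not physical RAM. Launching the IGB jar with a larger heap
          # is one way past a ~3 GB cap. The jar filename/path is illustrative.
          import subprocess

          subprocess.run([
              "java",
              "-Xmx8g",               # raise the maximum heap to 8 GB
              "-jar", "igb_exe.jar",  # path to the IGB executable jar
          ])
          ```

          If this is indeed the cause, a sentence in the paper on adjusting IGB's memory setting would preempt the "additional knowledge needed" concern.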

          Taking all of this together, we ultimately judged your paper as falling below the bar relative to the other papers we are receiving.

          I hope this is helpful. It also shows how difficult it can be to write out the full details of what we did: testing editors write their notes in a way that makes sense to us, and we would need to rewrite everything to make it completely clear to authors. So speed trumps details, even though there are a lot of them. (There is more as well, but it was in shorthand so terse I couldn't render it in any sensible form.)

          So sorry we had to decline to publish, but I'm glad I had the opportunity to give you a view of how much we do to assess work. We take everyone's work very seriously because we know how important it is to the people who did it.

          All of this said, we all think there are certainly other journals that would consider this work. So do send it elsewhere, as of course we expect you will!

          Nowlan Freese added a comment -

          Glad it was helpful. At GigaScience we definitely wish we could provide this type of information to all authors (we know it can be very useful, especially for junior researchers), but unfortunately time constraints make that impossible. In this case, though, I was happy to be able to take a moment.

          We're not sure either why it didn't run. The tester's notes on your manuscript didn't say what type of computer or browser they used (though I can't imagine he was using Safari; I know Microsoft just came out with a new browser called Edge, but again, I don't know what he used). I could ask, if you think it would be useful.

          Nowlan Freese added a comment -

          Paper submitted to BMC Bioinformatics


            People

            • Assignee: Nowlan Freese
            • Reporter: Nowlan Freese
            • Votes: 0
            • Watchers: 1

              Dates

              • Created:
              • Updated:
              • Resolved: