Glad I had a chance to talk with you about this manuscript. Again, I wish we could always give more detailed feedback on work we return, but a speedy response is important to us, and to authors. I do understand that more information can be quite helpful (and it's disheartening to get a "Sorry, we can't consider your manuscript further" letter with only a comment that sounds pat and standard). It can make authors worry that the editors didn't do much of an assessment beyond reading the abstract, or, as you noted, that they missed the point. I hope the following shows the detailed work we typically do to assess submitted work, and more importantly that it helps you understand the criteria we used to make the decision, which may help with your submission elsewhere. One of your biggest concerns was that the editors might have taken the example you included to be the main part of the paper. That was definitely not the case, as that discussion wasn't part of how we judged the system itself. So you can rest easy on that point.
For the editorial assessment and decision, here is what was done and how the ultimate opinion was formed. As noted in the letter, it finally came down to this: we get a lot of these papers, and for diversity's sake in a journal aimed at a broad audience, we can't publish them all. So after full consideration, the question is where a paper falls in content and usability relative to the others we have recently considered. (Obviously, work we assessed last year may not be as advanced as work submitted today, and things we published that were submitted less recently are judged against where the field was at that time. It's always a moving target.)
Okay, on to the assessment.
One of our team members has expertise in data visualization and related areas. We always check whether things work, of course, and also what the user would get from a tool relative to what else is out there. They did feel it was close to the border of our acceptance level, but that the added value over what already exists wasn't enough to put it over that line. They noted that it definitely could allow visualization of large data sets, but that overall (and not to say this isn't a lot of work on your end), with some minor additions, it was more of a wrapper tying together existing tools (i.e., the CyVerse API and IGB). There was more to it than that, but we come away with a big-picture assessment after going through things.
They also noted that they couldn't get it to work. That is not to say it DOESN'T work, but for us it wouldn't run, and that raises concerns about stability. And if we did something incorrect, it is likely that a more naive user would have a tough time too, which limits how widely the field could adopt it.
Here is what happened when they tested it (the person assessing it has a CyVerse account):
Following the instructions to download IGB and link BioViz to CyVerse, BioViz did indeed connect through the browser to view CyVerse data, so that part worked. But on clicking the "View in IGB" button, it said "checking if IGB is running" (which it was) and then nothing else happened. So, before your next submission, you should check whether something is missing from your instructions, or whether the connection between the two is unstable and only works some of the time. There may be a step missing in the text, so you will want to follow it exactly as written, because that is what a more naive reader would do. Whatever the case, this was our experience.
Then, on to how likely people would be to use it: there was general agreement that the trend right now is toward online, web-based applications, so a tool that requires downloading and installing local software to view a genome online seemed less likely to see quick uptake. (And here is another possible reason we couldn't get it to work: something may have gone wrong during the download, or something on our system may have interfered; we're guessing.) It may be worth testing your work on several different computers.
Regardless, more and more people are looking for web-based applications, and that again reduced our interest.
Our assessor also ran the panda genome through the IGB viewer directly (not through BioViz Connect) to see how that worked on its own, since that affects how useful a connection of this type is. He wasn't familiar with the IGB viewer and found it clunky and difficult to follow. That is just a user-interface note, not an assessment of your work; we filed it under 'learning curve for a new user' and it was not part of our decision. But I figured the information might be of use to you, so I include it here.
However, when he ran the IGB viewer with the panda genome, it struggled with the larger chromosomes and for some reason was restricted to using 3 GB of memory (on a machine with 32 GB available).
We couldn't figure out why that was the case. Either that is simply how it works, or it is something a user would need additional knowledge to get past.
Taking all of this together, we ultimately judged your paper to fall just under the bar relative to the other papers we are receiving.
I hope this is helpful. It also shows how difficult it can be to write out the full details of what we did: testing editors write up their notes in a way that makes sense to us, and we would need to rewrite everything to make it completely clear to authors. So speed trumps details, even though there are a lot of them. (There is more as well, but it was in shorthand so terse that I couldn't figure out how to write it up sensibly.)
I'm sorry we had to decline to publish, but glad I had the opportunity to give you a view of how much we do to assess work. We take everyone's work very seriously, because we know how important it is to the people who did it.
All of that said, we all think there are certainly other journals that would consider this work. So do send it elsewhere, as of course we expect you will!