IGB / IGBF-3264

Set up new Quickload infrastructure on UNCC hosting

    Details

    • Type: Task
    • Status: Closed (View Workflow)
    • Priority: Major
    • Resolution: Done
    • Affects Version/s: None
    • Fix Version/s: None
    • Labels:
      None
    • Story Points:
      5
    • Sprint:
      Spring 4 2023 Feb 21, Spring 9 2023 May 1, Summer 1 2023 May 15, Summer 2 2023 May 29, Summer 3 2023 June 12, Summer 4 2023 June 26, Summer 5 2023 July 10, Summer 6 2023 July 24, Summer 7 2023 Aug 7, Summer 8 2023 Aug 21

      Description

      Ann requested more space and a fix to "igbquickload.org" name resolution on campus.

      See the attached email explaining the details and the work done by UNCC IT professional Michael Cowan:

      Tasks:

      • migrate data to the new location and test access
      • create new git code repository for managing Integrated Genome Browser Quickload sites for the project
      • create Web page with Javascript button users can click to add the new quickload site to IGB

      References:

      Attachments

      Issue Links

      Activity

            ann.loraine Ann Loraine added a comment -

            Check the new disk with:

            lsblk
            

            Check linux release:

            uname -a
            Linux cci-vm12 2.6.32-431.23.3.el6.x86_64 #1 SMP Wed Jul 16 06:12:23 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
            

            Check linux distribution:

            cat /etc/system-release
            Red Hat Enterprise Linux Server release 6.5 (Santiago)
            
            ann.loraine Ann Loraine added a comment -

            Made new repo with "main" as main branch: https://bitbucket.org/hotpollen/genome-browser-visualization/src/main/
            Moving code from other repositories into this one.

            ann.loraine Ann Loraine added a comment -

            Michael Cowan One IT set up the disk and configured it:

            ... it’s mounted at /mnt/igbdata . I would be grateful if you could log on sometime soon and verify everything looks good. I’ll delete the snapshot I made once you do.

            ann.loraine Ann Loraine added a comment - - edited
            • Disk mounting looks OK (ls, chmod, work as expected)

            Next:

            • Enable apache to access new disk mount
            • Find out why scp installed on igbquickload.org host lacks "-J" option
            • Probably need to update this VM. Not sure how. Look into it.
            ann.loraine Ann Loraine added a comment -

            New error:

            [root@cci-vm12 httpd]# service httpd start
            Starting httpd: (30)Read-only file system: httpd: could not open error log file /etc/httpd/logs/error_log.
            Unable to open logs
                                                                       [FAILED]
            

            Asked M. Cowan for assistance.
            Also, I need to find the VM host's public IP address.

            ann.loraine Ann Loraine added a comment -

            Logged in to the new VM. The 5 TB of storage appears to be missing. Not sure if I am in the right place or not. Emailed MC for info.

            ann.loraine Ann Loraine added a comment - - edited

            Update:

            There are now two igbquickload hosts on UNCC infrastructure:

            1) old one - vm12, private IP address 10.16.57.232, accessed with

            ssh -J aloraine@cci-jump.uncc.edu -p 1657 aloraine@igbquickload.org
            

            or

            ssh -J aloraine@cci-jump.uncc.edu -p 1657 aloraine@10.16.57.232
            

            2) new one - cci-igb, private IP address 10.16.80.158, accessed with

            ssh aloraine@10.16.80.158
            

            Installed and started apache2 on cci-igb. Web server is active. No configurations done as yet.

            Waiting on Michael to configure the host to enable me to use the new 5 TB of space.
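
            The two jump-host invocations above can be shortened with an ssh client config entry; a sketch, with made-up Host aliases ("igb-old" and "igb-new" are not real names, just shorthand):

            ```
            # ~/.ssh/config sketch; "igb-old" and "igb-new" are made-up aliases
            Host igb-old
                HostName 10.16.57.232
                Port 1657
                User aloraine
                ProxyJump aloraine@cci-jump.uncc.edu

            Host igb-new
                HostName 10.16.80.158
                User aloraine
            ```

            With this in place, `ssh igb-old` replaces the full `ssh -J ... -p 1657 ...` command.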

            ann.loraine Ann Loraine added a comment -

            Public (external) IP address for cci-vm12:

            Name: cci-vm12.uncc.edu
            Address: 152.15.236.217
            Name: cci-vm12.uncc.edu
            Address: 10.16.57.232

            Currently the public IP for "igbquickload.org" is 52.205.102.207 (an EC2 instance named "quickload.bioviz.org" in my account).

            ann.loraine Ann Loraine added a comment - - edited

            Plan: move the MAC address associated with vm12 (the old VM) over to the new one (cci-igb) to avoid having to get a new public IP address.

            ann.loraine Ann Loraine added a comment -

            Notes:

            OS updates for the VMs run Saturday mornings at 5:00 am.
            Make sure Apache is configured to restart when the system reboots.

            e.g., 'systemctl enable apache2' on Ubuntu, or 'chkconfig httpd on' on older RHEL systems

            ann.loraine Ann Loraine added a comment - - edited

            Suggestion from Michael Cowan for how to manage data transfers:

            Try "Tailscale", a very nice tool for moving data from place to place.

            To use it, install the client on the machines you would like to access. It functions like a VPN tunnel but with more fine-grained management, and is a way to deal with the lack of direct ssh/scp access. It uses WireGuard, a protocol that improves on traditional VPNs and is built into the Linux kernel.

            Any of the nodes can be used as an egress node.

            ann.loraine Ann Loraine added a comment - - edited

            Other notes:

            Currently CCI is using "Quobyte" on the Slurm cluster in CCI. It is a flexible storage solution that users can configure to accommodate whatever file system setup they need.

            He suggested we could use it for igb data hosting.

            Used term: "pciu4 fabric" (not sure what this is; sounds important to know about)

            CCI is looking at licensing products from https://lambdalabs.com, as many faculty have been purchasing / using it. This company re-packages open source software and sells it, kind of like Red Hat, maybe?

            ann.loraine Ann Loraine added a comment -

            Update:

            Michael will make a change to the storage configuration and let me know when it's ready to be set up.

            ann.loraine Ann Loraine added a comment -

            New storage configuration:

            root@cci-igb:~# df -h
            Filesystem                      Size  Used Avail Use% Mounted on
            tmpfs                           1.2G  1.3M  1.2G   1% /run
            /dev/mapper/system_vg-root_lv    18G  3.3G   14G  20% /
            tmpfs                           5.9G     0  5.9G   0% /dev/shm
            tmpfs                           5.0M     0  5.0M   0% /run/lock
            /dev/sda1                       1.1G  6.1M  1.1G   1% /boot/efi
            /dev/mapper/system_vg-home_lv   5.9G   96K  5.6G   1% /home
            /dev/mapper/system_vg-opt_lv    7.8G   24K  7.4G   1% /opt
            /dev/mapper/system_vg-var_lv    7.8G  113M  7.3G   2% /var
            /dev/mapper/system_vg-lib_lv    7.8G  631M  6.8G   9% /var/lib
            /dev/mapper/data_vg-srv_lv      2.0T   28K  1.9T   1% /srv
            /dev/mapper/system_vg-log_lv    7.8G   62M  7.4G   1% /var/log
            tmpfs                           1.2G  4.0K  1.2G   1% /run/user/97303865
            /dev/mapper/data_vg-igbdata_lv  4.9T   28K  4.7T   1% /mnt/igbdata
            
            ann.loraine Ann Loraine added a comment - - edited

            Logged into new host.
            Made new virtual hosts following instructions in https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-20-04.
            Virtual hosts:

            • data.bioviz.org
            • igbquickload.org
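
            For reference, a minimal definition for one of these virtual hosts, following the DigitalOcean guide linked above (the file path and DocumentRoot are assumptions; the actual DocumentRoot on cci-igb may differ):

            ```apache
            # /etc/apache2/sites-available/data.bioviz.org.conf (Ubuntu layout; path assumed)
            <VirtualHost *:80>
                ServerName data.bioviz.org
                DocumentRoot /mnt/igbdata
                ErrorLog ${APACHE_LOG_DIR}/data.bioviz.org-error.log
                CustomLog ${APACHE_LOG_DIR}/data.bioviz.org-access.log combined
            </VirtualHost>
            ```

            Enabled with `a2ensite data.bioviz.org` followed by `systemctl reload apache2`.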

            Tested by editing my local /etc/hosts file to include:

            10.16.80.158 data.bioviz.org
            10.16.80.158 igbquickload.org
            10.16.80.158 www.igbquickload.org
            

            Noticed that the svn volume was unmounted on the EC2 host. Re-mounted following these notes.

            Logged into the new host and used svn to check out the IGB quickload data repository:

            root@cci-igb:/mnt/igbdata# svn --username=guest co https://svn.bioviz.org/repos/genomes/quickload
            

            Note the location is /mnt/igbdata, the mount with 5 TB of space.

            ann.loraine Ann Loraine added a comment -

            Contents of the igbquickload (old VM) host:

            [aloraine@cci-vm12 htdocs]$ ls -lh
            total 16K
            drwxr-xr-x 6 root root 4.0K Nov 11  2018 autoindex_strapdown
            drwxr-xr-x 3 root root 4.0K Nov 11  2018 bar
            lrwxrwxrwx 1 root root   14 Nov 11  2018 blueberry -> /srv/blueberry
            lrwxrwxrwx 1 root root   12 Nov 11  2018 chipseq -> /srv/chipseq
            lrwxrwxrwx 1 root root   11 Nov 11  2018 dnaseq -> /srv/dnaseq
            -rw-r--r-- 1 root root  357 Nov 11  2018 index.html
            lrwxrwxrwx 1 root root   14 Nov 11  2018 quickload -> /srv/quickload
            lrwxrwxrwx 1 root root   11 Nov 11  2018 rnaseq -> /srv/rnaseq
            lrwxrwxrwx 1 root root   16 Nov 12  2018 schallerlab -> /srv/schallerlab
            lrwxrwxrwx 1 root root   29 Nov 11  2018 secureQuickloadTestSites -> /srv/secureQuickloadTestSites
            lrwxrwxrwx 1 root root   26 Nov 11  2018 smokeTestingQuickload -> /srv/smokeTestingQuickload
            lrwxrwxrwx 1 root root   11 Nov 12  2018 soyseq -> /srv/soyseq
            drwxr-xr-x 4 root root 4.0K Sep 23  2022 styling
            

            Moving smaller directories over to the new VM.

            ann.loraine Ann Loraine added a comment - - edited

            Copying hotpollen from RENCI host with:

            root@cci-igb:/mnt/igbdata# rsync -avzhP -e "ssh -J aloraine@hop.renci.org" aloraine@lorainelab-quickload.scidas.org:/projects/igbquickload/lorainelab/www/main/htdocs/hotpollen/ hotpollen
            

            onto:

            • The new VM
            • The Lustre file system on the cluster
            ann.loraine Ann Loraine added a comment - - edited

            Note:

            The new Quickload main directory needs a symbolic link from the web-facing root "/" to its "styling" subdirectory to enable strapdown presentation of directory contents.
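
            A sketch of that link, using placeholder paths (the real DocumentRoot and styling locations on cci-igb are assumptions; adjust to the actual layout):

            ```shell
            # Placeholder paths: a temp dir stands in for the Apache DocumentRoot.
            webroot=$(mktemp -d)
            mkdir -p "$webroot/quickload/styling"
            # Link "styling" into the web-facing root so /styling resolves for the
            # strapdown directory listings.
            ln -s "$webroot/quickload/styling" "$webroot/styling"
            ls -ld "$webroot/styling"
            ```
            
            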

            ann.loraine Ann Loraine added a comment - - edited

            Notes on today's work:

            • Asked Michael Cowan to switch the network as required to make the new VM public. He will do it at 1 pm tomorrow.
            • When I checked the hotpollen quickload data transfer (via rsync from the RENCI host to the VM) after scrum, about 1.3 T out of 1.5 T had transferred.
            • The transfer is running in a "screen" session started the night before, as user root.

            Notes from last night:

            • When I tried to access data.bioviz.org via https, the server failed to respond. Why? Need to review logs.
            ann.loraine Ann Loraine added a comment -

            Transfer complete:

            sent 49.29K bytes received 1.42T bytes 24.70M bytes/sec
            total size is 1.42T speedup is 1.00
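
            rsync's "speedup" figure is the total size divided by the bytes actually transferred (sent + received); a quick sanity check of the numbers above (the units cancel, so the decimal-vs-binary reading of rsync's human-readable suffixes does not matter here):

            ```shell
            # speedup = total_size / (sent + received), per rsync's summary line
            speedup=$(awk 'BEGIN {
              total = 1.42e12        # "total size is 1.42T"
              sent = 49.29e3         # "sent 49.29K bytes"
              received = 1.42e12     # "received 1.42T bytes"
              printf "%.2f", total / (sent + received)
            }')
            echo "$speedup"   # prints 1.00: no compression/delta savings on this run
            ```
            
            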

            ann.loraine Ann Loraine added a comment - - edited

            Networking is updated.
            Logged into the new VM with:

            ssh -J aloraine@cci-jump.uncc.edu -p 1657 aloraine@10.16.57.232
            

            Opened my browser to "10.16.57.232" and observed the index.html page set up for data.bioviz.org.
            Visiting http://10.16.57.232/quickload/ or http://10.16.57.232/hotpollen opened the expected locations.

            Visiting the public IP address returned nothing, no doubt because I'm on the UNC Charlotte network: computers inside the network cannot reach web sites hosted within the network via their public IP addresses, and must use the private addresses instead.

            ann.loraine Ann Loraine added a comment -

            Next steps:

            • Replicate all data hosted currently on the "igbquickload.org" EC2 host onto the new VM at UNC Charlotte.
            • Once that is done, switch the IP address for "igbquickload.org" domain to the new VM's public facing IP address.
            ann.loraine Ann Loraine added a comment - - edited

            Something has changed. I can now log into the new VM with:

            • ssh aloraine@10.16.57.232

            No password required, as my key is already installed in my account's "authorized_keys" file.

            Commencing to replicate igbquickload.org content on the host.

            Required directories:

            • chipseq - DONE
            • dnaseq - DONE
            • quickload - DONE
            • rnaseq - in progress

            Interesting. File transfer rate is much faster. Now it's >150MB/s versus 20MB/s yesterday.

            ann.loraine Ann Loraine added a comment -

            "rnaseq" is nearly finished. Decided to switch igbquickload.org to 152.15.236.217.

            ann.loraine Ann Loraine added a comment - - edited

            "rnaseq" is finished with the following report:

            sent 45.73M bytes received 275.21G bytes 18.61M bytes/sec
            total size is 572.68G speedup is 2.08

            This is a little confusing because when I changed into the source directory on the RENCI host (/projects/igbquickload/lorainelab/www/main/htdocs/rnaseq) and executed "du -h", the total size of the current working directory (".") was reported as 655G. However, when I ran the same command in the target directory on the new VM (/mnt/igbdata/rnaseq), its size was reported as 534G. Why the different sizes?
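
            One possible explanation (an assumption, not verified here): du reports allocated blocks, not file contents, so sparse files, hard links, and differing filesystem block sizes can make identical data report different totals on two hosts. A minimal illustration of the allocated-vs-apparent distinction:

            ```shell
            # A 100 MB sparse file has an apparent size of 100M but allocates
            # (almost) no blocks, so "du" and "du --apparent-size" disagree.
            tmpdir=$(mktemp -d)
            truncate -s 100M "$tmpdir/sparse.dat"
            apparent=$(du -h --apparent-size "$tmpdir/sparse.dat" | cut -f1)
            allocated=$(du -h "$tmpdir/sparse.dat" | cut -f1)
            echo "apparent=$apparent allocated=$allocated"
            ```
            
            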

            Looking into it a bit more...

            ann.loraine Ann Loraine added a comment -

            Set up SSL.

            Closing as this is now done.

            ann.loraine Ann Loraine added a comment - - edited

            Copied blueberry data with:

            data.bioviz.org root$ rsync -avzhP -e "ssh -J aloraine@hop.renci.org" aloraine@lorainelab-quickload.scidas.org:/projects/igbquickload/lorainelab/www/main/htdocs/blueberry/ blueberry 
            

            Copied soyseq data with:

            rsync -avzhP -e "ssh -J aloraine@hop.renci.org" aloraine@lorainelab-quickload.scidas.org:/projects/igbquickload/lorainelab/www/main/htdocs/soyseq/ soyseq
            

              People

              • Assignee:
                ann.loraine Ann Loraine
                Reporter:
                ann.loraine Ann Loraine
              • Votes:
                0 Vote for this issue
                Watchers:
                1 Start watching this issue

                Dates

                • Created:
                  Updated:
                  Resolved: