Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Done
    • Affects Version/s: None
    • Fix Version/s: None
    • Labels: None

Description

      Situation: The wiki.bioviz.org website has been behaving erratically or has been down entirely since August 9, while I was working on the IGB release testing subsystem pages.

      Task: Fix the issue with the wiki.bioviz.org Confluence website.

Attachments

Issue Links

Activity

            Ann Loraine added a comment -

            Rebooted Confluence. Removed backup files to free up space on the server.

            I need to create a script that will remove these backup files after they have been copied to the S3 bucket.
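
            The idea is just "verify the object exists in S3, then delete the local copy." A minimal sketch (the file and bucket names are taken from the commands quoted later in this ticket; the finished script appears in a later comment):

            f="jira_$(date +%F).sql.gz"
            # aws s3 ls exits non-zero when no matching object is found.
            if aws s3 ls "s3://lorainelab-backups/backups/jiraSQL/$f" > /dev/null; then
                rm "$HOME/backups/jiraSQL/$f"
            fi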

            Ann Loraine added a comment -

            Added deletion of local backup files. To test, I will check that the backup file is no longer on the host tomorrow after the cron job runs again.
            Now looking into deleting old backups from the S3 bucket.
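
            To see what could be pruned, one option (a sketch; it relies on aws s3 ls printing last-modified date, time, size, and name, so a plain sort lists objects oldest-first):

            aws s3 ls s3://lorainelab-backups/backups/jiraSQL/ | sort | head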

            Ann Loraine added a comment -

            Decided not to delete old backups automatically, since storage costs are currently manageable, but I did delete some of the older files.
            The scripts used for backing up are synced to the S3 bucket. I used this command to do it:

            jira.bioviz.org ec2-user $ aws s3 sync --exclude '*~' --exclude '.*' $HOME/backup s3://lorainelab-backups/backups/scripts
            

            Notes:

            • The --exclude options ensure that no hidden files or directories (names starting with ".") and no Emacs backup files (names ending in "~") get copied over to the S3 bucket.
            • The EC2 host and the S3 bucket are configured to allow these aws commands to interact with the bucket. Thanks to this, we don't need to store AWS credentials on the host. However, if we re-deploy Jira and Confluence onto a new host, we will need to replicate that configuration to keep the backup strategy working; a quick sanity check is sketched below.
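
            As a sanity check after any re-deploy, the two commands below (a minimal sketch; they assume the aws CLI is installed and that the host's role grants access to the bucket) confirm the host can reach the bucket without stored credentials:

            # Show which IAM identity the host is acting as.
            aws sts get-caller-identity
            # Confirm that identity can list the backup bucket.
            aws s3 ls s3://lorainelab-backups/backups/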
            Ann Loraine added a comment - edited

            For reference, here are the scripts:

            backup.sh:

            #!/bin/bash
            
            ###
            ### Find out what day this is.
            ###
            d="$(date +%F)"
            
            ### Tip: for testing, reset "d" 
            #d="testing4"
            
            ###
            ### Make backup file for the Jira database.
            ### Doing this first ensures that anything mentioned
            ### in the SQL already exists in the Jira home
            ### location. Maybe later we can shut down access to the
            ### host while this is going on, but I don't think that
            ### is needed right now, considering only a few users
            ### currently have write access to Jira.
            ###
            s="jira_$d.sql.gz"
            echo "Making backup file $s for the Jira database."
            fname="$HOME/backups/jiraSQL/$s"
            mysqldump -u jira -pjira jira --no-tablespaces | gzip -9 > "$fname"
            
            ###
            ### Make backup file for the Jira home directory.
            ###
            s="jiraHome_$d.tar.gz"
            echo "Making backup file $s for the Jira home directory."
            fname="$HOME/backups/jiraHome/$s"
            ### Note: ../jiradata is relative to /home/ec2-user/backup,
            ### which chron.sh changes into before running this script.
            tar czfP "$fname" -C ../jiradata jiraHome
            
            ###
            ### Copy Jira database back up file to s3.
            ###
            s="jira_$d.sql.gz"
            fname="$HOME/backups/jiraSQL/$s"
            echo "Copying $s to s3, unless an identical object with the same name already exists in the s3 bucket."
            aws s3 sync --exclude '*' --include "$s" $HOME/backups/jiraSQL s3://lorainelab-backups/backups/jiraSQL
            echo "Completed copy of $s to s3."
            echo "Checking that an object called $s exists in s3."
            VAR=$(aws s3 ls s3://lorainelab-backups/backups/jiraSQL/$s)
            if [ -z "${VAR}" ]; then
                echo "Backup to s3 failed for file $s."
            else 
              echo "An object named $s exists in the s3 bucket, so removing local copy $fname."
              rm "$fname"
              echo "Backup of $s succeeded."
            fi
            
            ###
            ### Copy Jira home back up file to s3.
            ###
            s="jiraHome_$d.tar.gz"
            fname="$HOME/backups/jiraHome/$s"
            echo "Copying $s to s3, unless an identical object with the same name already exists in the s3 bucket."
            aws s3 sync --exclude '*' --include "$s" $HOME/backups/jiraHome s3://lorainelab-backups/backups/jiraHome
            echo "Completed copy of $s to s3."
            echo "Checking that an object called $s exists in the s3 bucket."
            VAR=$(aws s3 ls s3://lorainelab-backups/backups/jiraHome/$s)
            if [ -z "${VAR}" ]; then
                echo "Backup to s3 failed for file $s."
            else 
              echo "An object named $s exists in the s3 bucket, so removing local copy $fname."
              rm "$fname"
              echo "Backup of $s succeeded."
            fi
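
            A quick way to spot-check that a given day's database backup is actually usable (a sketch, not part of the script; it assumes today's file name and the bucket layout above):

            # Pull today's dump back down and verify the gzip stream is intact.
            aws s3 cp "s3://lorainelab-backups/backups/jiraSQL/jira_$(date +%F).sql.gz" /tmp/
            gunzip -t "/tmp/jira_$(date +%F).sql.gz" && echo "dump looks intact"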
            

            chron.sh:

            #!/bin/bash
            report="$(date --iso-8601).txt"
            cd /home/ec2-user/backup || exit 1
            ### Note: stderr is discarded, so error output from mysqldump
            ### or tar will not show up in the report file.
            ./backup.sh 2> /dev/null 1> "../backups/reports/$report"
            
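            For reference, the crontab entry driving this would look something like the line below. The actual schedule isn't recorded in this ticket, so the 03:00 daily run time is an assumption:

            # Assumed schedule: run the nightly backup wrapper at 03:00 every day.
            0 3 * * * /home/ec2-user/backup/chron.sh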
            Ann Loraine added a comment -

            Committed backup scripts to https://bitbucket.org/lorainelab/backup in addition to backing them up in "backups/scripts" of the Loraine Lab backups S3 bucket.

            Moving to Done.

            Ann Loraine added a comment -

            Update:

            • Backup files were generated as expected and copied to the S3 bucket early this morning.
            • The report file was written to backups/reports on the EC2 host for Jira.
            • Backup files for the Jira database and home directory were deleted from the host as intended once the copy to the S3 bucket was confirmed via the AWS CLI.

People

    • Assignee: Ann Loraine
    • Reporter: Nowlan Freese
    • Votes: 0
    • Watchers: 2
