AF:
NF:0
PS:10
SRH:1
SFN:
DSR:
MID:<20080603042103.1c1f4a9c@ripper.onstor.net>
CFG:
PT:0
S:andy.sharp@onstor.com
RQ:
SSV:onstor-exch02.onstor.net
NSV:
SSH:
R:<john.rogers@onstor.com>
MAID:1
X-Sylpheed-Privacy-System:
X-Sylpheed-Sign:0
SCF:#mh/Mailbox/sent
RMID:#imap/andys@onstor.net@onstor-exch02.onstor.net/INBOX	0	BB375AF679D4A34E9CA8DFA650E2B04E09C424CC@onstor-exch02.onstor.net
X-Sylpheed-End-Special-Headers: 1
Date: Tue, 3 Jun 2008 04:21:09 -0700
From: Andrew Sharp <andy.sharp@onstor.com>
To: "John Rogers" <john.rogers@onstor.com>
Subject: Re: Mightydog Weekly Meeting 052808
Message-ID: <20080603042109.5646123d@ripper.onstor.net>
In-Reply-To: <BB375AF679D4A34E9CA8DFA650E2B04E09C424CC@onstor-exch02.onstor.net>
References: <BB375AF679D4A34E9CA8DFA650E2B04E09C424C3@onstor-exch02.onstor.net>
	<BB375AF679D4A34E9CA8DFA650E2B04E09C424CC@onstor-exch02.onstor.net>
Organization: Onstor
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

I recommend that you wait a couple of hours on that: the nightly
build machine is still running the nightly build, and taking this
volume offline would kill it.

On Tue, 3 Jun 2008 04:15:00 -0700 "John Rogers"
<john.rogers@onstor.com> wrote:

> Reminder: nx-d_buildup, commonly known as /n/build-trees, is going
> offline for eek cleanup. It's going offline right now.
> 
>  
> 
> ________________________________
> 
> From: John Rogers 
> Sent: Monday, June 02, 2008 8:04 PM
> To: John Rogers; dl-Cougar
> Cc: dl-mightydog-alert
> Subject: RE: Mightydog Weekly Meeting 052808
> 
>  
> 
> Reminder: s_eng, which contains the shares for software, hardware, qe,
> and techpubs, is going offline for one more eek. Right now.
> 
>  
> 
> ________________________________
> 
> From: John Rogers 
> Sent: Monday, June 02, 2008 4:38 PM
> To: John Rogers; dl-Cougar
> Cc: dl-mightydog-alert
> Subject: RE: Mightydog Weekly Meeting 052808
> 
>  
> 
> Current Status of data recovery and 3.3 readiness:
> 
> Summary of scheduled events:
> 
> 8:00pm Mon eek of s_eng
> 
> 4:00am Tues eek of nx-d_buildup.
> 
>  
> 
> Eeking and Mirroring:
> 
>  
> 
> Volume: new_eng
> 
> Description: This is the mirror that we have to restore data from. It
> needs to be eek'ed before we can get the data off of it.
> 
> Action Plan:
> 
> Eek, promote, restore data with robocopy. Brian to write up the
> robocopy plan.
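[A first cut at that robocopy step might look like the following. This is a sketch only; the filer UNC paths, log location, and option set are assumptions until Brian writes up the actual plan.]

```shell
:: Sketch of the new_eng -> s_eng restore; paths are placeholders.
:: /MIR     mirror the source tree (copies new/changed files, deletes extras)
:: /COPYALL copy data, attributes, timestamps, security, owner, auditing info
:: /R:3 /W:5  retry failed copies 3 times, waiting 5 seconds between tries
:: /LOG:...   keep a log so the restore can be verified afterwards
robocopy \\filer\new_eng \\filer\s_eng /MIR /COPYALL /R:3 /W:5 ^
    /LOG:C:\restore\new_eng-to-s_eng.log
```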
> 
> Current Status:
> 
> promoted, eeked, ready for data restore.
> 
> !New! - Dry run completed from new_eng to s_eng. We have found that we
> need to complete an additional eek on s_eng before restoring data, in
> order to fix the superblock and report the correct data size. An eek
> will be performed on s_eng tonight at 8:00pm.
> 
>  
> 
> Volume: mirror: new_corp, source: s_corp
> 
> Description: This is corporate data, it holds Marketing, Operations
> and ISO9000 Quality data. Both volumes have minor corruption, which
> should be trivial to fix.
> 
> Action plan:
> 
> Eek the source, then do mirror transfer
> 
> Current Status:
> 
> eeked and new mirror created. complete.
> 
>  
> 
> Volume: mirror: new_home, source: s_home
> 
> Description: This is user home directories; it mostly impacts the
> engineering dev dept. This has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> eek the source, then do mirror transfer
> 
> Current Status:
> 
> eeked source, regularly scheduled mirror transfer complete. complete.
> 
>  
> 
> Volume: nx_d_buildup
> 
> Description: This volume is a primary volume and has no mirror. This
> has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume and create a mirror
> 
> Current Status: 
> 
> not started
> 
> !New! - Scheduled downtime for /n/build-trees. Eek will be performed
> at 4:00am Tuesday.
> 
>  
> 
> Volume: nx_sysadmin
> 
> Description: this volume holds IT data, somewhat non-critical. This has
> minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume, no mirror required.
> 
> Current Status: 
> 
> eeked and mirrored. complete.
> 
>  
> 
> Volume: s_buildup
> 
> Description: This was the original source volume for build-trees.
> Apparently there has been a divergence of data between this volume and
> nx-d_buildup. Data protection is required on both volumes until the
> mess is sorted out. This has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume, create a mirror
> 
> Current status:
> 
> !New! - eek completed.
> 
>  
> 
> Volume: s_phd
> 
> Description: this holds customer support data, the data warehouse.
> Storage for the mirror is available; we need to re-create the mirror.
> This has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume, create a mirror
> 
> Current Status:
> 
> eeked, regularly scheduled mirror completed. complete.
> 
>  
> 
> Volume: vmfs1
> 
> Description: IT testing
> 
> Action plan: 
> 
> delete the volume
> 
> Current Status:
> 
> eeked.
> 
>  
> 
> Storage Migration:
> 
> The team decided it would be appropriate to migrate to the new
> storage once the volumes are clean and mirrors are complete. Raj to
> write a procedure for promoting mirrors and moving the old source
> volumes to be the mirror volumes.
> 
> Current Status:
> 
> Not Started
> 
>  
> 
>  
> 
> ________________________________
> 
> From: John Rogers 
> Sent: Friday, May 30, 2008 3:16 PM
> To: dl-Cougar
> Subject: FW: Mightydog Weekly Meeting 052808
> 
>  
> 
> Here are the requirements we have set forth to be completed prior to
> the 3.3 upgrade. Basically we want all volumes to come up clean from
> eek and all mirrors to be re-synced or created where required.
> 
>  
> 
> If you have questions, comments, or suggestions, please let me know. We
> have already begun executing this list. I think we should be looking
> good early next week. I will provide an overall status after the
> weekend.
> 
>  
> 
> ________________________________
> 
> From: John Rogers 
> Sent: Wednesday, May 28, 2008 1:31 PM
> To: dl-mightydog-alert
> Subject: Mightydog Weekly Meeting 052808
> 
>  
> 
> Hi all,
> 
>  
> 
> Here is the action plan from today's meeting:
> 
>  
> 
> Raj to provide eek syntax for running eek in these cases.
> 
>  
> 
> Eeking and Mirroring:
> 
>  
> 
> Volume: new_eng
> 
> Description: This is the mirror that we have to restore data from. It
> needs to be eek'ed before we can get the data off of it.
> 
> Action Plan:
> 
> Eek, promote, restore data with robocopy. Brian to write up the
> robocopy plan.
> 
>  
> 
> Volume: mirror: new_corp, source: s_corp
> 
> Description: This is corporate data, it holds Marketing, Operations
> and ISO9000 Quality data. Both volumes have minor corruption, which
> should be trivial to fix.
> 
> Action plan:
> 
> Eek the source, then do mirror transfer
> 
>  
> 
> Volume: mirror: new_home, source: s_home
> 
> Description: This is user home directories; it mostly impacts the
> engineering dev dept. This has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> eek the source, then do mirror transfer
> 
>  
> 
> Volume: nx_d_buildup
> 
> Description: This volume is a primary volume and has no mirror. This
> has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume and create a mirror
> 
>  
> 
> Volume: nx_sysadmin
> 
> Description: this volume holds IT data, somewhat non-critical. This has
> minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume, no mirror required.
> 
>  
> 
> Volume: s_buildup
> 
> Description: This was the original source volume for build-trees.
> Apparently there has been a divergence of data between this volume and
> nx-d_buildup. Data protection is required on both volumes until the
> mess is sorted out. This has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume, create a mirror
> 
>  
> 
> Volume: s_phd
> 
> Description: this holds customer support data, the data warehouse.
> Storage for the mirror is available; we need to re-create the mirror.
> This has minor corruption; fixing should be trivial.
> 
> Action Plan:
> 
> Eek the volume, create a mirror
> 
>  
> 
> Volume: vmfs1
> 
> Description: IT testing
> 
> Action plan: 
> 
> delete the volume
> 
>  
> 
> Storage Migration:
> 
>  
> 
> The team decided it would be appropriate to migrate to the new
> storage once the volumes are clean and mirrors are complete. Raj to
> write a procedure for promoting mirrors and moving the old source
> volumes to be the mirror volumes.
> 
>  
> 
