Date: Wed, 4 Mar 2009 14:41:09 -0800
From: Andrew Sharp <andy.sharp@onstor.com>
To: Brian Stark <brian.stark@onstor.com>
Subject: Re: Odd Cougar boot up message - irq 56 - Prom revision feature or
 a problem? - case 11602
Message-ID: <20090304144109.68db770f@ripper.onstor.net>
In-Reply-To: <2779531E7C760D4491C96305019FEEB518521AC663@exch1.onstor.net>
References: <200902131508.n1DF8mD04517@mailhost-rtp.css.glasshouse.com>
	<20090214161028.6dca7ce2@ripper.onstor.net>
	<2779531E7C760D4491C96305019FEEB518521AC663@exch1.onstor.net>
Organization: Onstor
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

It was reported to me that they reseated the blade in question and this
problem went away.

On Wed, 4 Mar 2009 13:32:12 -0800 Brian Stark <brian.stark@onstor.com>
wrote:

> Could this be related to the case at Infinium?  It looks like it may
> be based on the logs that Rich sent from Infinium.  Could this be
> another case where the CF vendor changed something?
> 
> 
> Brian
> 
> 
> -----Original Message-----
> From: Andy Sharp 
> Sent: Saturday, February 14, 2009 4:10 PM
> To: Alan Cooke (Glasshouse)
> Cc: dl-cstech
> Subject: Re: Odd Cougar boot up message - irq 56 - Prom revision
> feature or a problem? - case 11602
> 
> While I am not allowed to comment on PROMs anymore, I will go out on a
> limb and say this definitely has nothing to do with 1.0.10 PROM.  It
> is most likely a bad CF card.  If not a bad CF card, then a bad blade.
> Smart money says it's a bad CF card.  Can't tell which one from the
> snippets you've included here, but I'm going to make a wild guess and
> say it's not the one you're booting off of.   ~:^)
> 
> Cheers,
> 
> a
> 
> 
> On Fri, 13 Feb 2009 07:09:15 -0800 "Alan Cooke (Glasshouse)"
> <acooke@css.glasshouse.com> wrote:
> 
> > Hi All,
> > There is a new Cougar install at a customer in the UK - Sangar
> > Centre - where Alistair Stewart saw an unexpected set of messages
> > from one blade at boot up.
> > 
> > There is a load of what look like error messages thrown up about
> > irq 56. Boot logs from three other systems - the lab systems in
> > Japan and the UK, and the GH lab system - do not show these
> > messages, but I noticed that these three lab systems all have PROM
> > rev 1.0.8 and the customer's system has PROM rev 1.0.10.
> > 
> > Is this the reason for the differences, or is something really
> > broken or moaning for a good reason? The customer's system boots
> > up OK.
> > 
> > From the lab boxes:
> > ONStor Inc. PROM_SIBYTE_CG : Cougar-prom-1.0.8 : Thu Jul 31 17:59:23
> > 2008
> > 
> > From the customer's system:
> > ONStor Inc. PROM_SIBYTE : prom-1.0.10 : Tue Jan 13 17:53:22 2009
> > 
> > From the lab systems boot up:
> > Yenta TI: socket 0000:00:07.0, mfunc 0x00000022, devctl 0x60  
> > Yenta: ISA IRQ mask 0x0000, PCI irq 56
> > Socket status: 30000059
> > pcmcia: parent PCI bridge I/O window: 0x0 - 0x1ffffff
> > 
> > From the customer's system:
> > Yenta TI: socket 0000:00:07.0, mfunc 0x00000022, devctl 0x60
> > irq 56: nobody cared (try booting with the "irqpoll" option)
> > Call Trace:
> > [<ffffffff82007170>] dump_stack+0x8/0x38
> > [<ffffffff82050580>] __report_bad_irq+0x40/0xd8
> > [<ffffffff820508a8>] note_interrupt+0x290/0x2d8
> > [<ffffffff8204f7d0>] __do_IRQ+0x140/0x160
> > [<ffffffff820011a4>] plat_irq_dispatch+0x1e4/0x1f0
> > [<ffffffff82001840>] ret_from_irq+0x0/0x4
> > [<ffffffff8202a4b0>] __do_softirq+0x70/0x140
> > [<ffffffff8202a610>] do_softirq+0x90/0x98
> > [<ffffffff82001840>] ret_from_irq+0x0/0x4
> > [<ffffffff8204fdf0>] setup_irq+0x178/0x2a0
> > [<ffffffff82050008>] request_irq+0xf0/0x110
> > [<ffffffff82189b4c>] yenta_probe_cb_irq+0x64/0x120
> > [<ffffffff8218a0dc>] ti12xx_override+0x15c/0x6b0
> > [<ffffffff8218b684>] yenta_probe+0x59c/0x6f0
> > [<ffffffff82138544>] pci_device_probe+0x84/0xa8
> > [<ffffffff82150084>] driver_probe_device+0xa4/0x220
> > [<ffffffff8215043c>] __driver_attach+0xfc/0x148
> > [<ffffffff8214ef58>] bus_for_each_dev+0x58/0xb8
> > [<ffffffff8214f49c>] bus_add_driver+0xb4/0x230
> > [<ffffffff82138770>] __pci_register_driver+0x58/0xb0
> > [<ffffffff822d46a8>] kernel_init+0xd0/0x2f8
> > [<ffffffff82003930>] kernel_thread_helper+0x10/0x18
> > 
> > handlers:
> > [<ffffffff8218b910>] (yenta_probe_handler+0x0/0x58)
> > Disabling IRQ #56
> > irq 56: nobody cared (try booting with the "irqpoll" option)
> > Call Trace:
> > [<ffffffff82007170>] dump_stack+0x8/0x38
> > [<ffffffff82050580>] __report_bad_irq+0x40/0xd8
> > [<ffffffff820508a8>] note_interrupt+0x290/0x2d8
> > [<ffffffff8204f7d0>] __do_IRQ+0x140/0x160
> > [<ffffffff820011a4>] plat_irq_dispatch+0x1e4/0x1f0
> > [<ffffffff82001840>] ret_from_irq+0x0/0x4
> > [<ffffffff8202a4b0>] __do_softirq+0x70/0x140
> > [<ffffffff8202a610>] do_softirq+0x90/0x98
> > [<ffffffff82001840>] ret_from_irq+0x0/0x4
> > [<ffffffff8204fdf0>] setup_irq+0x178/0x2a0
> > [<ffffffff82050008>] request_irq+0xf0/0x110
> > [<ffffffff8218b64c>] yenta_probe+0x564/0x6f0
> > [<ffffffff82138544>] pci_device_probe+0x84/0xa8
> > [<ffffffff82150084>] driver_probe_device+0xa4/0x220
> > [<ffffffff8215043c>] __driver_attach+0xfc/0x148
> > [<ffffffff8214ef58>] bus_for_each_dev+0x58/0xb8
> > [<ffffffff8214f49c>] bus_add_driver+0xb4/0x230
> > [<ffffffff82138770>] __pci_register_driver+0x58/0xb0
> > [<ffffffff822d46a8>] kernel_init+0xd0/0x2f8
> > [<ffffffff82003930>] kernel_thread_helper+0x10/0x18
> > 
> > handlers:
> > [<ffffffff8218afd0>] (yenta_interrupt+0x0/0x118)
> > Disabling IRQ #56
> > Yenta: ISA IRQ mask 0x0000, PCI irq 56
> > 
> > Is this a "feature" of the 1.0.10 PROM?
> > Is something set which should not be?
> > 
> > Any info appreciated.
> > 
> > Regards, Alan.
> > 
> > 
