AF:
NF:0
PS:10
SRH:1
SFN:
DSR:
MID:
CFG:
PT:0
S:andy.sharp@onstor.com
RQ:
SSV:exch1.onstor.net
NSV:
SSH:
R:<John.Keiffer@onstor.com>,<sandrine.boulanger@onstor.com>
MAID:1
X-Sylpheed-Privacy-System:
X-Sylpheed-Sign:0
SCF:#mh/Mailbox/sent
RMID:#imap/andys@onstor.net@exch1.onstor.net/INBOX	0	2779531E7C760D4491C96305019FEEB51851D417E7@exch1.onstor.net
X-Sylpheed-End-Special-Headers: 1
Date: Thu, 29 Jan 2009 13:25:12 -0800
From: Andrew Sharp <andy.sharp@onstor.com>
To: John Keiffer <John.Keiffer@onstor.com>
Cc: Sandrine Boulanger <sandrine.boulanger@onstor.com>
Subject: Re: [OnStorDev] #136: idmapd appears to be taking a lot of memory,
 when it shouldn't be doing much of anything...
Message-ID: <20090129132512.75948464@ripper.onstor.net>
In-Reply-To: <2779531E7C760D4491C96305019FEEB51851D417E7@exch1.onstor.net>
References: <2779531E7C760D4491C96305019FEEB51851D417E7@exch1.onstor.net>
Organization: Onstor
X-Mailer: Sylpheed-Claws 2.6.0 (GTK+ 2.8.20; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Excellent, as usual....

On Thu, 29 Jan 2009 12:56:26 -0800 John Keiffer
<John.Keiffer@onstor.com> wrote:

> 
> Posted for you Andy. Let me know if this isn't exactly how you would
> have liked me to phrase it...
> 
> 
> -----Original Message-----
> From: nbts@nexenta.com [mailto:nbts@nexenta.com] 
> Sent: Thursday, January 29, 2009 12:06 PM
> Cc: support@nexenta.com; dl-leopard-defects
> Subject: Re: [OnStorDev] #136: idmapd appears to be taking a lot of
> memory, when it shouldn't be doing much of anything...
> 
> #136: idmapd appears to be taking a lot of memory, when it shouldn't
> be doing much of anything...
> -------------------------------+--------------------------------------------
>   Reporter:  onstor            |       Owner:  somebody
>       Type:  defect            |      Status:  new     
>   Priority:  Highly Desirable  |   Milestone:          
>  Component:  NMS               |     Version:  1.1.3   
> Resolution:                    |    Keywords:          
> -------------------------------+--------------------------------------------
> Old description:
> 
> > We were running Leopard-2 with only 4GB of memory. This is about
> > half of what we will typically run. We then started running an
> > Exchange Load Generator against some zvols on Leopard-2. We got
> > memory triggers that faulted, which we expected. However, someone
> > noticed that idmapd appears to be taking too much memory when it
> > shouldn't really be doing anything. Below is the comment and the
> > memory email we received.
> >
> > -----Original Message-----
> > From: Andy Sharp
> > Sent: Thursday, January 29, 2009 10:40 AM
> > To: John Keiffer
> > Cc: dl-Leopard Core Team
> > Subject: Re: [NMS Report] NOTICE: host Leopard-2
> >
> > That's incredibly useful information.  It tells me that something is
> > quite out of sorts if idmapd is so huge.  In this case, idmapd
> > shouldn't even be doing anything since we're just supplying iSCSI
> > to the Windows machine; there's no id mapping going on.  Going to an
> > 8GB machine might paper over the issue, but we should try to get to
> > the bottom of it, or at least file a bug with Nexenta on it.
> >
> > Cheers,
> >
> > a
> >
> > On Thu, 29 Jan 2009 10:27:22 -0800 John Keiffer
> > <John.Keiffer@onstor.com> wrote:
> >
> > > Example of auto-support email.
> > >
> > > -----Original Message-----
> > > From: myhost-noreply@Leopard-2 [mailto:myhost-noreply@Leopard-2]
> > > Sent: Tuesday, January 27, 2009 7:12 PM
> > > To: John Keiffer; Patrick Haverty
> > > Subject: [NMS Report] NOTICE: host Leopard-2
> > >
> > >
> > > FAULT:
> > > **********************************************************************
> > > FAULT: Appliance   : Leopard-2 (OS v1.1.3b104, NMS v1.1.3-1)
> > > FAULT: Machine SIG : G4A99KA8
> > > FAULT: Primary MAC : 0:1e:c9:2e:18:e2
> > > FAULT: Time        : Tue Jan 27 19:12:14 2009
> > > FAULT: Trigger     : memory-check
> > > FAULT: Fault Type  : ALARM
> > > FAULT: Fault ID    : 3
> > > FAULT: Fault Count : 5
> > > FAULT: Severity    : NOTICE
> > > FAULT: Description : The appliance is low on free memory - remains less than
> > > FAULT:             : 5% of total RAM and idmapd/4 allocates 292MB, which is
> > > FAULT:             : more than configured maximum
> > > FAULT:
> > > **********************************************************************
> > >
> > > !
> > > ! For more details on this trigger, click on the link below:
> > > ! http://10.85.1.216:2000/data/runners?selected_runner=memory-check
> > > !
> > >
> > > PRSTAT:
> > >    PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
> > >    359 daemon    295M  292M sleep   59    0   0:01:02 0.0% idmapd/4
> > >   7412 root       36M   31M sleep   59    0   0:00:31 0.0% nmv.py/5
> > >  11772 root       33M   31M sleep   59    0   0:00:02 0.0% nmc/1
> > >   7550 root       33M   31M sleep   59    0   0:00:01 0.0% nmc/1
> > >  22489 root       32M   30M sleep   59    0   0:00:01 0.0% nmc/1
> > >   7387 root       58M   28M sleep   51    0   0:01:06 0.1% nms/1
> > >  29194 root       20M   18M sleep   59    0   0:00:00 0.0% hosts-check/1
> > >  22877 root       19M   17M sleep   59    0   0:00:00 0.0% nfs-collector/1
> > > Total: 83 processes, 462 lwps, load averages: 0.20, 0.18, 0.16
> 
> New description:
> 
>  We were running Leopard-2 with only 4GB of memory. This is about half
>  of what we will typically run. We then started running an Exchange
>  Load Generator against some zvols on Leopard-2. We got memory triggers
>  that faulted, which we expected. However, someone noticed that idmapd
>  appears to be taking too much memory when it shouldn't really be doing
>  anything. Below is the comment and the memory email we received.
> 
>  -----Original Message-----
>  From: Andy Sharp
>  Sent: Thursday, January 29, 2009 10:40 AM
>  To: John Keiffer
>  Cc: dl-Leopard Core Team
>  Subject: Re: [NMS Report] NOTICE: host Leopard-2
> 
>  That's incredibly useful information.  It tells me that something is
>  quite out of sorts if idmapd is so huge.  In this case, idmapd
>  shouldn't even be doing anything since we're just supplying iSCSI
>  to the Windows machine; there's no id mapping going on.  Going to an
>  8GB machine might paper over the issue, but we should try to get to
>  the bottom of it, or at least file a bug with Nexenta on it.
> 
>  Cheers,
> 
>  a
> 
>  On Thu, 29 Jan 2009 10:27:22 -0800 John Keiffer
>  <John.Keiffer@onstor.com> wrote:
> 
>  Example of auto-support email.
>  {{{
>  > -----Original Message-----
>  > From: myhost-noreply@Leopard-2 [mailto:myhost-noreply@Leopard-2]
>  > Sent: Tuesday, January 27, 2009 7:12 PM
>  > To: John Keiffer; Patrick Haverty
>  > Subject: [NMS Report] NOTICE: host Leopard-2
>  >
>  >
>  > FAULT:
>  > **********************************************************************
>  > FAULT: Appliance   : Leopard-2 (OS v1.1.3b104, NMS v1.1.3-1)
>  > FAULT: Machine SIG : G4A99KA8
>  > FAULT: Primary MAC : 0:1e:c9:2e:18:e2
>  > FAULT: Time        : Tue Jan 27 19:12:14 2009
>  > FAULT: Trigger     : memory-check
>  > FAULT: Fault Type  : ALARM
>  > FAULT: Fault ID    : 3
>  > FAULT: Fault Count : 5
>  > FAULT: Severity    : NOTICE
>  > FAULT: Description : The appliance is low on free memory - remains less than
>  > FAULT:             : 5% of total RAM and idmapd/4 allocates 292MB, which is
>  > FAULT:             : more than configured maximum
>  > FAULT:
>  > **********************************************************************
>  >
>  > !
>  > ! For more details on this trigger, click on the link below:
>  > ! http://10.85.1.216:2000/data/runners?selected_runner=memory-check
>  > !
>  >
>  > PRSTAT:
>  >    PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
>  >    359 daemon    295M  292M sleep   59    0   0:01:02 0.0% idmapd/4
>  >   7412 root       36M   31M sleep   59    0   0:00:31 0.0% nmv.py/5
>  >  11772 root       33M   31M sleep   59    0   0:00:02 0.0% nmc/1
>  >   7550 root       33M   31M sleep   59    0   0:00:01 0.0% nmc/1
>  >  22489 root       32M   30M sleep   59    0   0:00:01 0.0% nmc/1
>  >   7387 root       58M   28M sleep   51    0   0:01:06 0.1% nms/1
>  >  29194 root       20M   18M sleep   59    0   0:00:00 0.0% hosts-check/1
>  >  22877 root       19M   17M sleep   59    0   0:00:00 0.0% nfs-collector/1
>  > Total: 83 processes, 462 lwps, load averages: 0.20, 0.18, 0.16
>  }}}
> 
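
For anyone who wants to poke at this on the box before Nexenta responds,
here's a minimal sketch of how one might verify both halves of the
trigger condition by hand. This assumes a stock OpenSolaris/Nexenta
userland (prstat/pmap/kstat/svcadm); the idmap service FMRI below is an
assumption on my part, not something taken from the ticket.

  # How big is idmapd really? Cross-check the prstat numbers.
  PID=$(pgrep -x idmapd)
  prstat -p "$PID" 1 1            # SIZE/RSS snapshot, as in the report
  pmap -x "$PID" | tail -1        # per-mapping totals on the last line

  # Reproduce the "less than 5% of total RAM" half of the trigger:
  PGSZ=$(pagesize)
  FREE=$(kstat -p unix:0:system_pages:freemem | awk '{print $2}')
  PHYS=$(kstat -p unix:0:system_pages:physmem | awk '{print $2}')
  echo "free: $((FREE * PGSZ / 1048576))MB of $((PHYS * PGSZ / 1048576))MB ($((FREE * 100 / PHYS))%)"

  # If idmapd really is idle (iSCSI only, no CIFS/NFSv4 name mapping),
  # restarting it should release the memory without affecting clients:
  svcadm restart svc:/system/idmap:default

If RSS drops after the restart and climbs back up under pure iSCSI load,
that's a pretty strong signal of a leak to hand Nexenta.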
