The qpopper list archive ending on 19 Feb 2004
Topics covered in this issue include:
1. Re: qpopper high load average
"Paul" <paul at kbs.net dot au>
Thu, 22 Jan 2004 10:51:39 +1100
2. Re: qpopper high load average
The Little Prince <thelittleprince at asteroid-b612 dot org>
Wed, 21 Jan 2004 15:55:13 -0800 (PST)
3. Re: qpopper high load average
Bart Dumon <bart at crossbar dot net>
Thu, 22 Jan 2004 11:58:28 +0100
4. Re: qpopper high load average
Bart Dumon <bart at crossbar dot net>
Thu, 22 Jan 2004 11:58:33 +0100
5. Re: latest gcc but all hosts = 0.0.0.0
Hugh Sasse Staff Elec Eng <hgs at dmu.ac dot uk>
Thu, 22 Jan 2004 11:42:53 +0000 (WET)
6. Re: qpopper high load average
Bart Dumon <bart at crossbar dot net>
Thu, 22 Jan 2004 11:58:21 +0100
7. Re: latest gcc but all hosts = 0.0.0.0 (fwd)
Hugh Sasse Staff Elec Eng <hgs at dmu.ac dot uk>
Thu, 22 Jan 2004 14:29:05 +0000 (WET)
8. --disable status, again
Jim Medley <jmedley at aesrg.tamu dot edu>
Thu, 22 Jan 2004 09:06:01 -0600
9. Re: qpopper high load average
Bart Dumon <bart at crossbar dot net>
Thu, 22 Jan 2004 18:24:23 +0100
10. Re: qpopper high load average
Chuck Yerkes <chuck+qpopper at yerkes dot com>
Thu, 22 Jan 2004 09:54:41 -0800
11. Re: qpopper high load average
The Little Prince <thelittleprince at asteroid-b612 dot org>
Thu, 22 Jan 2004 10:09:27 -0800 (PST)
12. Re: --disable status, again
The Little Prince <thelittleprince at asteroid-b612 dot org>
Thu, 22 Jan 2004 10:56:11 -0800 (PST)
13. Re: --disable status, again
The Little Prince <thelittleprince at asteroid-b612 dot org>
Thu, 22 Jan 2004 12:25:16 -0800 (PST)
14. Re: qpopper high load average
"Paul" <paul at kbs.net dot au>
Fri, 23 Jan 2004 10:15:20 +1100
15. Re: qpopper high load average
Bart Dumon <bart at crossbar dot net>
Fri, 23 Jan 2004 00:29:35 +0100
16. Re: qpopper - tcpdump port 110
"Eric" <nyre at kiercorp dot com>
Fri, 23 Jan 2004 07:36:22 -0700
17. virtualhosting with qpopper
Ken Anderson <ka at pacific dot net>
Fri, 23 Jan 2004 09:36:59 -0800
18. Re: virtualhosting with qpopper
The Little Prince <thelittleprince at asteroid-b612 dot org>
Fri, 23 Jan 2004 10:50:14 -0800 (PST)
19. Re: virtualhosting with qpopper
Alan Brown <alanb at digistar dot com>
Fri, 23 Jan 2004 14:00:39 -0500 (EST)
20. Re: virtualhosting with qpopper
Ken Anderson <ka at pacific dot net>
Fri, 23 Jan 2004 12:54:33 -0800
21. Re: virtualhosting with qpopper
The Little Prince <thelittleprince at asteroid-b612 dot org>
Fri, 23 Jan 2004 22:06:14 -0800 (PST)
22. Re: virtualhosting with qpopper
"Alan W. Rateliff, II" <lists at rateliff dot net>
Sat, 24 Jan 2004 08:18:26 -0500
23. Re: virtualhosting with qpopper
The Little Prince <thelittleprince at asteroid-b612 dot org>
Sat, 24 Jan 2004 09:54:40 -0800 (PST)
24. Re: virtualhosting with qpopper
Ken Anderson <ka at pacific dot net>
Sat, 24 Jan 2004 11:05:05 -0800
25. Re: Corrupted mail drop. [pop_dropcopy.c:863]
The Little Prince <thelittleprince at asteroid-b612 dot org>
Mon, 2 Feb 2004 14:12:14 -0800 (PST)
26. Re: Corrupted mail drop. [pop_dropcopy.c:863]
The Little Prince <thelittleprince at asteroid-b612 dot org>
Mon, 2 Feb 2004 13:48:29 -0800 (PST)
27. Is the list alive?
"Eric" <nyre at kiercorp dot com>
Mon, 2 Feb 2004 09:19:58 -0700
28. Qpopper patches
Joe Maimon <jmaimon at ttec dot com>
Mon, 02 Feb 2004 00:01:24 -0500
29. Re: Qpopper4.0.5 + PAM + Solaris8 (or Solaris 9) + LDAP
"kclo2000" <kclo2000 at netvigator dot com>
Wed, 4 Feb 2004 13:19:30 +0800
30. Re: Is the list alive?
george <gasjr4wd at mac dot com>
Mon, 09 Feb 2004 00:54:57 -0500
31. Re: Is the list alive?
Joseph S D Yao <jsdy at center.osis dot gov>
Mon, 9 Feb 2004 16:58:50 -0500
32. Re: Is the list alive?
"Derek C." <coffee at blarg dot net>
Mon, 09 Feb 2004 15:03:46 -0800
33. SQL and LDAP (Re: virtualhosting with qpopper)
Chuck Yerkes <chuck+qpopper at yerkes dot com>
Mon, 9 Feb 2004 14:11:28 -0500
34. Qpopper patches
Joe Maimon <jmaimon at ttec dot com>
Tue, 10 Feb 2004 14:03:40 -0500
35. bulletine help needed
"Muhammad Talha" <talha at worldcall.net dot pk>
Wed, 11 Feb 2004 19:24:12 +0400
36. Mail Transaction Failed
ghicks at cadence dot com
Thu, 12 Feb 2004 12:49:00 +0530
37. Re: SQL and LDAP (Re: virtualhosting with qpopper)
Harald Kapper <hk at kapper dot net>
Thu, 12 Feb 2004 12:35:50 +0100
38. Real strange situation with Qpopper
"Kevin M. Barrett" <kmb at kmb dot com>
Sat, 14 Feb 2004 09:54:27 -0500
39. Re: Real strange situation with Qpopper
Tim Villa <tvilla at cyllene.uwa.edu dot au>
Mon, 16 Feb 2004 07:37:33 +0800
40. Re: Real strange situation with Qpopper
"Kevin M. Barrett" <kmb at kmb dot com>
Sun, 15 Feb 2004 20:41:13 -0500
41. POPPER jamming
"Derek Conniffe" <derek at rivertower dot ie>
Mon, 16 Feb 2004 11:16:55 -0000
42. RE: POPPER jamming
"Derek Conniffe" <derek at rivertower dot ie>
Mon, 16 Feb 2004 14:23:11 -0000
43. RE: POPPER jamming
"Chris Payne" <cpayne at pr.uoguelph dot ca>
Mon, 16 Feb 2004 09:42:58 -0500
44. RE: POPPER jamming
Alan Brown <alanb at digistar dot com>
Mon, 16 Feb 2004 15:18:16 -0500 (EST)
45. RE: POPPER jamming
"Derek Conniffe" <derek at rivertower dot ie>
Mon, 16 Feb 2004 20:34:30 -0000
46. RE: POPPER jamming
Alan Brown <alanb at digistar dot com>
Mon, 16 Feb 2004 15:47:23 -0500 (EST)
47. RE: POPPER jamming
"Chris Payne" <cpayne at pr.uoguelph dot ca>
Mon, 16 Feb 2004 15:50:40 -0500
48. -ERR Unknown command: "g". ?
"Vsevolod (Simon) Ilyushchenko" <simonf at cshl dot edu>
Thu, 19 Feb 2004 10:43:03 -0500
49. Re: -ERR Unknown command: "g". ?
Tim Villa <tvilla at cyllene.uwa.edu dot au>
Fri, 20 Feb 2004 09:37:27 +0800
50. Re: -ERR Unknown command: "g". ?
Clifton Royston <cliftonr at lava dot net>
Thu, 19 Feb 2004 16:13:04 -1000
From: "Paul" <paul at kbs.net dot au>
Subject: Re: qpopper high load average
Date: Thu, 22 Jan 2004 10:51:39 +1100
Hi Bart,
We run a mailserver for 297,000-odd mailboxes. During peak we normally see
a load average of about 15; this is sustained and doesn't fluctuate much.
The box is a dual 3.06GHz Xeon with 3GB of DDR and 525GB of Ultra320 raid5
which is used for the spool. On the server we have qpopper auth'ing via mysql
and exim as the local smtp server.
Our mail is stored in a double hash array like /var/spool/mail/e/b/ebadine
and we just use flatfile, instead of Maildir. The RAID should really be
raid10, but that's not always realistic; we use raid5, which is a bit slower
than raid10 but meets our needs.
Maildir would be a potential slowdown, especially over NFS. What speed in
Mb/sec do you get over the NFS connection? I'm assuming it's 100mbit full
duplex.
When we originally had everyone in /var/spool/mail/$username our system
bogged down to an insane level; once we double hashed it, it was fantastic.
Do you run any form of performance monitoring on the server? We use mrtg and
graph cpu, network, memory etc. so we can easily spot bottlenecks.
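To make the double-hash layout concrete: the spool path is derived from
the first two letters of the username. A minimal sketch in C, with the
layout assumed from the example above (the fallback for one-character
usernames is a guess, not necessarily Paul's actual scheme):

    #include <stdio.h>

    /* Build the double-hashed spool path for a user, e.g.
     * "ebadine" -> "/var/spool/mail/e/b/ebadine". */
    static void spool_path(const char *user, char *buf, size_t len)
    {
        /* assumed: one-character names reuse their first letter */
        char b = user[1] ? user[1] : user[0];

        snprintf(buf, len, "/var/spool/mail/%c/%c/%s", user[0], b, user);
    }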
----- Original Message -----
From: "Bart Dumon" <bart at crossbar dot net>
To: "Paul" <paul at kbs.net dot au>
Cc: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
Sent: Thursday, January 22, 2004 8:41 AM
Subject: Re: qpopper high load average
>
> On Wed, Jan 21, 2004 at 11:24:12AM +1100, Paul wrote:
> > How many connections/sec for the server during peak? What does the load
avg
> > get to? What storage is it (the mail spool)? What filesystem?
>
> during peaks we get about 15 connections/sec per server, the load
> gets up to 800 if we do not interfere. once the popper gets
> restarted the load will decrease. the mail spool is kept on a NAS
> and is accessed using nfs.
>
>
> bart
> --
>
Date: Wed, 21 Jan 2004 15:55:13 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: qpopper high load average
On Wed, 21 Jan 2004, Bart Dumon wrote:
> Tony,
>
> i'm not saying it's directly related to the maildir patch, i've had
> very good performance until just 2 weeks ago. the amount of active
> users has increased a little, but the load increased dramatically out
> of the blue.
>
Forgive me Bart, it just totally went over my head that you and I had
talked a while ago when you had the rename() problem.
When you said NAS and nfs, the little light bulb went on above my head
:-)
Those context-switch (cs) numbers under vmstat seem awfully high to me.
I wish i had something good to tell you.
How many boxes have you got now? didn't you have like 4 last time we talked,
with 300k users?
Did you do any SW upgrades between the time it was going fine and when it
started crawling? That's probably a stupid question.
Might as well send the strace to me, and i'll check it out.. not sure what
else to tell you.
btw, did i ever send you anything regarding that unpopable zero-filesize
problem? I don't remember. If not, i'm sorry, it must have slipped my mind
and i'll revisit it.
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> i'm really not able to test other cases as far as mailbox format and
> authentication is concerned. and yes this is very frustrating for me
> too. the amount of mailboxes is just too big to simply switch them
> to local auth and/or mbox format. we've had a test setup but we were
> unable to reproduce this kind of behavior even when subjected to a very
> high number of connections. we seem only able to get this problem in the
> production environment. i've already thrown in 2 extra boxes (temporary)
> which are also handling pop sessions (this buys us some time, but the
> real problem is certainly not gone). this also gives us the opportunity to
> test some stuff and upgrade some stuff to see if it affects the problem.
>
> now, during peaks, i often see processes waiting to run:
>
> 040121 15:16:44 procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> 040121 15:16:44 r b swpd free buff cache si so bi bo in cs us sy id wa
> 040121 15:16:44 1 18 1792 10608 9620 2002316 0 0 64 401 2605 8911 6 10 85 0
> 040121 15:16:49 27 15 1792 13892 9608 1998868 0 0 30 890 6057 22022 12 26 62 0
> 040121 15:16:54 53 9 1792 15940 9624 2001352 0 0 84 187 5711 30160 11 31 59 0
> 040121 15:16:59 34 10 1792 15604 9668 2014532 0 0 25 326 5919 26858 9 22 68 0
> 040121 15:17:04 0 17 1792 10544 9728 2016196 0 0 42 371 5158 15635 9 12 78 0
> 040121 15:17:09 1 15 1792 10488 9624 2005556 0 0 42 364 6380 23565 12 24 64 0
> 040121 15:17:14 1 21 1792 18812 9620 2007196 0 0 22 816 6423 24094 13 25 62 0
> 040121 15:17:19 0 22 1792 11908 9640 2014016 0 0 106 482 6697 27224 11 25 64 0
> 040121 15:17:24 50 13 1792 15136 9648 2014668 0 0 63 179 4901 23929 7 23 69 0
> 040121 15:17:29 1 17 1792 10652 9692 2019968 0 0 35 186 4472 20241 6 16 77 0
> 040121 15:17:34 0 18 1792 12152 9716 2012088 0 0 58 221 5410 27104 9 24 67 0
>
> the mailspool is located on a NAS, so it's accessed using nfs, however
> the NAS seems to be doing fine, no performance issues to be found there
> until now, but we're checking this of course.
>
> the problem with running a popper under strace is that it outputs an awful
> lot of data and usually the load doesn't start increasing right away. i did
> see a bunch (1021) of fd errors, only at startup, like this:
>
> 15:54:41.272417 open("/dev/null", O_RDWR|O_CREAT|O_TRUNC, 0666) = 3
> 15:54:41.272589 fork() = 25607
> [pid 25607] 15:54:41.272788 setsid( <unfinished ...>
> [pid 25605] 15:54:41.272876 semget(IPC_PRIVATE, 0, 0x1|0 <unfinished ...>
> [pid 25607] 15:54:41.272924 <... setsid resumed> ) = 25607
> [pid 25605] 15:54:41.272952 <... semget resumed> ) = -1 ENOSYS (Function not implemented)
> [pid 25607] 15:54:41.272989 fork( <unfinished ...>
> [pid 25605] 15:54:41.273028 _exit(0) = ?
> [pid 25607] 15:54:41.273101 <... fork resumed> ) = 25608
> [pid 25608] 15:54:41.273314 chdir("/") = 0
> [pid 25607] 15:54:41.273428 semget(IPC_PRIVATE, 0, 0x1|0 <unfinished ...>
> [pid 25608] 15:54:41.273479 getrlimit(0x7, 0xbffff7f8 <unfinished ...>
> [pid 25607] 15:54:41.273518 <... semget resumed> ) = -1 ENOSYS (Function not implemented)
> [pid 25608] 15:54:41.273546 <... getrlimit resumed> ) = 0
> [pid 25607] 15:54:41.273579 _exit(0) = ?
> 15:54:41.273610 close(1024) = -1 EBADF (Bad file descriptor)
> 15:54:41.273736 close(1023) = -1 EBADF (Bad file descriptor)
> 15:54:41.273828 close(1022) = -1 EBADF (Bad file descriptor)
> 15:54:41.273901 close(1021) = -1 EBADF (Bad file descriptor)
> etc...
>
> i have an strace output generated during the peaks, if you're interested i
> can mail it to you (27mb) or a portion of it.
>
> i've even tried a 2.6.1 kernel to see if it had any effect, but it didn't,
> until i saw that a debian3 box was having the same problem but less
> intensively than a slack9 box. the major difference between those distros
> is the gcc version, 2.95.x <> 3.2.2. compiling qpopper with gcc 2.95.x did
> not have any effect on the slack boxes.
>
>
> bart
>
> On Tue, Jan 20, 2004 at 03:45:49PM -0800, The Little Prince wrote:
> > I haven't heard of any performance problems with my patch. People have
> > reported really good perf. with thousands of users.
> > Nobody has reported anything with radius auth. used at the same time.
> > Not being able to test any other cases (e.g. local auth. and maildir;
> > radius and mbox; etc.) doesn't help you.
> > Like Clifton said, check your stats. Watch vmstat statistics.
> > Even strace some of the processes to see what calls they spend the most
> > time in.
> >
> > --Tony
> > .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> > Anthony J. Biacco Network Administrator/Engineer
> > thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
> >
> > "You find magic from your god, and I find magic everywhere"
> > .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> >
> > On Tue, 20 Jan 2004, Clifton Royston wrote:
> >
> > > On Mon, Jan 19, 2004 at 03:47:38PM +0100, Bart Dumon wrote:
> > > > i'm running qpopper 4.0.5 on linux (2.4.x) with maildir patch
> > > > (0.12) and pam_radius for authentication.
> > > >
> > > > right now, i'm suffering from high cpu load averages once it's
> > > > gets too busy the load will skyrocket to abnormal high values
> > > > and the service will become unavailable untill it's restarted.
> > > > this typically happens during peak times when we receive 15 pop
> > > > sessions/sec.
> > > >
> > > > at first it thought it was radius related because i'm seeing the
> > > > following error message during the peak times:
> > > >
> > > > Jan 19 14:07:41 xxx popper[13404]: pam_radius_auth: RADIUS server x.x.x.x failed to respond
> > > >
> > > > but even with a more performant radius, the problem persists, it
> > > > looks like the radius errors are a consequence of the problem and
> > > > not the real cause.
> > > > everything is pointing in the direction of the amount of pop sessions
> > > > whenever you get to the 13-14pops/sec barrier, qpopper seems to
> > > > be giving up. it's not traffic related because the amount of traffic
> > > > is higher outside the peak hours.
> > >
> > > Usually this kind of overload is due to many users having large
> > > mailboxes (e.g. 30MB and up) in the old UNIX mbox format. In this
> > > format, the file needs to be recopied to update the messages' status
> > > when popped, which results in the POP sessions completely saturating
> > > your disk I/O bandwidth.
> > >
> > > I have also seen some Radius daemons show a tendency to die under
> > > this type of heavy load.
> > >
> > > I haven't seen reports of this with maildir format. However, what
> > > you're describing is consistent with I/O bandwidth saturation.
> > >
> > > If you are saturating your disk bandwidth, you'll see a large number
> > > of concurrent tasks waiting to run ("load" as shown by the uptime
> > > command or xload) but a high proportion of idle time shown by vmstat.
> > > At that point you'll need to try to figure out why all this bandwidth
> > > is still going on even with maildir format; I don't use that patch, so
> > > I can't help with troubleshooting it.
> > > -- Clifton
>
>
--
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Date: Thu, 22 Jan 2004 11:58:28 +0100
From: Bart Dumon <bart at crossbar dot net>
Subject: Re: qpopper high load average
Hi Paul,
We have 320k mailboxes and 4 dual-xeon mailservers. the setup was
highly influenced by the requirement of redundancy, therefore we're
using multiple machines with a spool on NFS. at least 2 boxes
should be able to handle all the traffic, so i'm quite certain
there is an underlying problem which affects our performance.
The NFS connection is 100mbit FD for each box, the NAS itself
is 1Gbit FD. during business hours one box can easily get
to 30-40mbit/sec, resulting in 100-120mbit/sec of throughput
on the NAS.
You're considering maildir/nfs to be a potential slowdown,
but i don't see it that way, because disk I/O is usually the
bottleneck on smtp/pop servers. both ways have their advantages
and disadvantages.
Anyway, you seem to be doing pretty well with your single system,
but do you have any idea of the average size of the mailboxes
on your system and the percentage of active users?
bart
On Thu, Jan 22, 2004 at 10:51:39AM +1100, Paul wrote:
> HI Bart,
>
> We run a mailserver for a 297,000 odd mailboxes. We normally during peak see
> a load average of about 15, this is sustained and doesn't flucate that much.
> The box is a dual 3.06ghz xeon with 3gb of DDR and 525GB of Ultra320 raid5
> which is used for spool. On the server we have qpopper auth'ing via mysql
> and exim as the local smtp server.
> Our mail is stored in a double hash array like /var/spool/mail/e/b/ebadine
> and we just use flatfile, instead of Maildir. The riad should be raid10 but
> it's not always realistic, we use raid5 for example, its a bit of slow down
> compared to raid10 but meets our needs
>
> Maildir would be a potential slow down, especially over NFS. What speed
> Mb/sec do you get over the NFS connection? I'm assuming its 100mbit full
> duplex
> When we originally had everyone in /var/spool/mail/$username our system
> bogged down to an insane level, once we double hashed it, it was fantasic.
>
> Do run any form of performance monitoring on the server? We use mrtg and
> graph cpu, network, memory etc so we can easily spot bottlenecks.
>
> ----- Original Message -----
> From: "Bart Dumon" <bart at crossbar dot net>
> To: "Paul" <paul at kbs.net dot au>
> Cc: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
> Sent: Thursday, January 22, 2004 8:41 AM
> Subject: Re: qpopper high load average
>
>
> >
> > On Wed, Jan 21, 2004 at 11:24:12AM +1100, Paul wrote:
> > > How many connections/sec for the server during peak? What does the load
> avg
> > > get to? What storage is it (the mail spool)? What filesystem?
> >
> > during peaks we get about 15 connections/sec per server, the load
> > gets up to 800 if we do not interfere. once the popper gets
> > restarted the load will decrease. the mail spool is kept on a NAS
> > and is accessed using nfs.
> >
> >
> > bart
> > --
> >
Date: Thu, 22 Jan 2004 11:58:33 +0100
From: Bart Dumon <bart at crossbar dot net>
Subject: Re: qpopper high load average
Tony,
On Wed, Jan 21, 2004 at 03:55:13PM -0800, The Little Prince wrote:
> >
> > i'm not saying it's directly related to the maildir patch, i've had
> > very good performance until just 2 weeks ago. the amount of active
> > users has increased a little, but the load increased dramatically out
> > of the blue.
> >
>
> Forgive me bart, it just totally went over my head that you and I had
> talked a while ago when you had the rename() problem.
> When you said NAS and nfs, the little light bulb went on above my head
> :-)
don't worry about it :)
> Those context switch/cs numbers under the vmstat seem awfully high to me.
> I wish i had something good to tell you.
> How many boxes you got now? didn't you have like 4 last time we talked
> with 300k users?
yes, 320k mailboxes, 4 servers.
> Do any SW upgrades between the time it was going fine, and it started
> crawling? That's probably a stupid question.
nah, we didn't do any upgrades; the only difference we see is increased
activity, but still, we should be able to handle it.
> Might as well send the strace to me, and i'll check it out..not sure what
> else to tell you.
k, i'll send it to you.
> btw, did i ever send you anything regarding that unpopable zero-filesize
> problem? I don't remember. If not, i'm sorry, it must have slipped my mind
> and i'll revisit it.
nope, i thought you were on vacation or something like that :) so i thought
of giving the mailing list a shot.
bart
--
Date: Thu, 22 Jan 2004 11:42:53 +0000 (WET)
From: Hugh Sasse Staff Elec Eng <hgs at dmu.ac dot uk>
Subject: Re: latest gcc but all hosts = 0.0.0.0
On Thu, 15 Jan 2004, I wrote:
> I have obtained the latest qpopper (4.0.5) and built it
> successfully on a Solaris 9 system. All the hosts show up as
> unverifiable 0.0.0.0. So I read the FAQ and this tells me:
[...]
Having had no reply to this I decided to see if this was something I
could patch myself.
My logs look like
Jan 22 11:20:31 brains qpopper[24014]: [ID 702911 mail.notice]
(v4.0.5) POP login by user "<elided>" at (0.0.0.0) 0.0.0.0
Jan 22 11:20:31 brains qpopper[24014]: [ID 702911 mail.notice]
Stats: <elided> 0 0 1555 33963919 0.0.0.0 0.0.0.0
Jan 22 11:20:56 brains qpopper[24070]: [ID 702911 mail.debug]
(v4.0.5) Unable to get canonical name of client 0.0.0.0: Authoritive
answer: Host not found (1)
Examining the source of popper/pop_init.c I see that the canonical
name comes from gethostbyaddr, which needs -lnsl, which it has (it is
already in CFLAGS in my popper/Makefile).
gethostbyaddr is called with an address, p->ipaddr, which comes from
1072 p->ipaddr = (char *) strdup ( inet_ntoa ( cs.sin_addr ) );
And cs gets set at:
1060 len = sizeof(cs);
1061 if ( getpeername ( sp, (struct sockaddr *) &cs, &len ) < 0 ) {
1062 pop_log ( p, POP_PRIORITY, HERE,
1063 "Unable to obtain socket and address of client: %s (%d )",
1064 STRERROR(errno), errno );
1065 EXIT ( 1 );
1066 }
and I don't get "Unable to obtain socket and address of client" in my logs,
so cs ought to be fairly sensible.
cs is a struct sockaddr_in, so its size is known, but getpeername says
[...] The int pointed to by the namelen parameter should
be initialized to indicate the amount of space pointed to by
name. On return it contains the actual size of the name
returned (in bytes), prior to any truncation. The name is
truncated if the buffer provided is too small.
The value of len is not checked before p->ipaddr is assigned.
This may have no bearing on my problem as such, but it seems worth
pointing out.
Hugh
Date: Thu, 22 Jan 2004 11:58:21 +0100
From: Bart Dumon <bart at crossbar dot net>
Subject: Re: qpopper high load average
David,
On Wed, Jan 21, 2004 at 05:09:31PM -0600, David Champion wrote:
> * On 2004.01.20, in <215446068092100234368 at lists.pensive dot org>,
> * "Clifton Royston" <cliftonr at lava dot net> wrote:
> >
> > Usually this kind of overload is due to many users having large
> > mailboxes (e.g. 30MB and up) in the old UNIX mbox format. In this
>
> I've been putting off a reply to see what others say, but I might
> as well go ahead with mentioning that if it does seem to be a high
> user-load problem, or if that seems like a good-enough temporary
> solution, you might want to take a look at how we reduced that at my
> site:
>
> http://home.uchicago.edu/~dgc/sw/qpopper/index.html
thanks for your feedback, we've been thinking about implementing this
earlier so i will probably give it a try and test it. but i do know
already that i'll have at least 2 issues:
1. we're using multiple servers (4); users who are listed in the
reserved memory segment on machine A will still be able to
pop on machine B. although i can think of a way to avoid this:
i can use persistence in the load balancing to make sure a user
is always directed to the same popserver within a certain timeframe.
this reduces the load-balancing effect and redundancy somewhat, but
it might not be too bad.
2. happymail can add additional wait time based on the size of the
mailbox. i can see how this is done with a standard qpopper,
because all mail is in one file, so stat'ing the mbox file will do
the trick. but when using maildir, every message is a file, and i
don't think qpopper will be returning the total size of the mailbox,
but i don't know this for sure. perhaps TLP will be able to answer
this right away.
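For what it's worth, totalling a maildir is just a matter of stat'ing each
file. A rough sketch in C, assuming a standard Maildir subdirectory such as
.../Maildir/new (whether the maildir patch actually does this is exactly the
open question above):

    #include <dirent.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* Sum the sizes of the regular files in one maildir
     * subdirectory (e.g. ".../Maildir/new" or ".../Maildir/cur"). */
    static long maildir_bytes(const char *dir)
    {
        char path[1024];
        struct dirent *de;
        struct stat st;
        long total = 0;
        DIR *d = opendir(dir);

        if (d == NULL)
            return -1;
        while ((de = readdir(d)) != NULL) {
            if (de->d_name[0] == '.')  /* skip ".", ".." and dotfiles */
                continue;
            snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
            if (stat(path, &st) == 0 && S_ISREG(st.st_mode))
                total += (long) st.st_size;
        }
        closedir(d);
        return total;
    }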
bart
--
Date: Thu, 22 Jan 2004 14:29:05 +0000 (WET)
From: Hugh Sasse Staff Elec Eng <hgs at dmu.ac dot uk>
Subject: Re: latest gcc but all hosts = 0.0.0.0 (fwd)
Thanks to Werner Mohras <mohren at mpi-seewiesen.mpg dot de>,
who wrote that my qpopper entry in inetd.conf had tcp6
instead of tcp. This was indeed true. [tcp6 means that the daemon
in question supports IPv6.]
How do I get the FAQ modified so that other people know to check
this? I expect others will modify existing inetd.conf files and
replicate my mistake.
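For comparison, the broken and working entries look something like this
(the popper path and flags here are assumptions; adjust to the local
install):

    # broken: tcp6 entry, IPv4 clients end up logged as 0.0.0.0
    pop3  stream  tcp6  nowait  root  /usr/local/sbin/popper  popper -s

    # working: plain tcp entry
    pop3  stream  tcp   nowait  root  /usr/local/sbin/popper  popper -s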
The only other outstanding issue is that len is not checked at
all. Even if no action is taken as a result, I think the truncation
should be logged.
Maybe something like this (untested):
----BEGINNING OF PATCH------
--- pop_init.c 2003-03-13 02:06:37.000000000 +0000
+++ pop_init_new.c 2004-01-22 14:24:17.968954000 +0000
@@ -445,6 +445,7 @@
int errflag = 0;
int c;
int len;
+ int olen; /* original value of len */
extern char * optarg;
int options = 0;
int sp = 0; /* Socket pointer */
@@ -1057,7 +1058,7 @@
/*
* Get the address and socket of the client to whom I am speaking
*/
- len = sizeof(cs);
+ olen = len = sizeof(cs);
if ( getpeername ( sp, (struct sockaddr *) &cs, &len ) < 0 ) {
pop_log ( p, POP_PRIORITY, HERE,
"Unable to obtain socket and address of client: %s (%d)",
@@ -1066,6 +1067,14 @@
}
/*
+ * Warn if the name has been truncated
+ */
+ if ( len > olen ) {
+ pop_log ( p, POP_PRIORITY, HERE,
+ "Name of client truncated by getpeername()");
+ }
+
+ /*
* Save the dotted decimal form of the client's IP address
* in the POP parameter block
*/
----END OF PATCH------
perhaps?
Thank you,
Hugh
Date: Thu, 22 Jan 2004 09:06:01 -0600
From: Jim Medley <jmedley at aesrg.tamu dot edu>
Subject: --disable status, again
Hi All: I thought I would try again to see if anyone might be able to
help with this problem. Originally ....
> I have a problem when downloading my email to two different
> computers. The first computer to download will show the email as
> unread, while the second shows the same email as read. This is a real
> problem when someone sends an email and their clock is off. Our mail
> server is sendmail / qpopper on a Mac OS 10.2.8. I have tried to
> re-configure qpopper with the --disable-status flag but it did not
> solve the problem. Any suggestions?
I am presently looking at the file 'qpopper4.0.5/popper/pop_stat.c'.
I believe this is where I might make the correction, but have no idea
what to do. Any thoughts other than changing to imap? Thanks, Jim
--
James Medley
Senior Research Associate
Texas A&M University
Agricultural Research and Extension Center
Beaumont, Texas 77713
(409)752-2741 ext 2252
jmedley at aesrg.tamu dot edu
Date: Thu, 22 Jan 2004 18:24:23 +0100
From: Bart Dumon <bart at crossbar dot net>
Subject: Re: qpopper high load average
David,
On Thu, Jan 22, 2004 at 11:58:21AM +0100, Bart Dumon wrote:
> >
> > http://home.uchicago.edu/~dgc/sw/qpopper/index.html
>
> thanks for your feedback, we've been thinking about implemting this
> earlier so i will probably give it a try and test it. but i do know
> already i'll have at least 2 issues.
>
i've been able to compile qpopper with maildir and happymail. to make
the patching successful i had to add 2 newlines to pop_config.c after
applying the happymail patch:
--- qpopper4.0.5/popper/pop_config.c 2004-01-22 17:27:48.000000000 +0100
+++ qpopper4.0.5-fixed-happymail/popper/pop_config.c 2004-01-22 17:11:44.000000000 +0100
@@ -231,0 +232 @@
+
@@ -926,0 +928 @@
+
now, qpopper has happymail support:
X-HAPPYMAIL pl8 bufsiz:0 base:0 rate:0s/1b free:-1b max:0s
but unfortunately it doesn't play well with maildir, because every mailbox
i try to pop is always empty :)
bart
--
Date: Thu, 22 Jan 2004 09:54:41 -0800
From: Chuck Yerkes <chuck+qpopper at yerkes dot com>
Subject: Re: qpopper high load average
Quoting Bart Dumon (bart at crossbar dot net):
> On Wed, Jan 21, 2004 at 03:55:13PM -0800, The Little Prince wrote:
...
> > Forgive me bart, it just totally went over my head that you and I had
> > talked a while ago when you had the rename() problem.
> > When you said NAS and nfs, the little light bulb went on above my head
> > :-)
...
> yes, 320k mailboxes, 4 servers.
More important numbers include concurrent access. In dialup
days, the rule of thumb for ISPs was that 1% would be hitting pop
mail at any moment (10% connected, 10% of those hitting mail).
Used here, that says 3,200 concurrent users, which I'd have no qualms
about on any single unix box.
Recall too that an NFS link appears in load average, because of
where NFS lives in the kernel. I've had a slowing NFS server bump
fairly responsive machines to loads in the 100's. (I did get,
after 5 minutes, an LA of 2050 once - as a large server was
just tanking. Machine was not responsive :)
I've worked in environments where 320k wasn't a great challenge,
but it's a different world than 1 box with lots of *good* RAID.
Doing the Earthlink type model, we used mbox, but we did LOTS
to deal with NFS locking - changes to mail.local and qpopper.
NFS (generically NAS) does mean MUCH slower disk access. My favored
RAID boxes let me write to them at 70MB/s per box (I've put in
multiple controllers and used software striping across RAID boxes for
more).
A good 1Gb connection with decent (not windows) drivers will give
you around 300Mb/s at best. (ttcp on a quiescent two machine network
may give better, yes, but not on a real network).
For practical math, we can call that 30MB/s or less.
Plus delays for the server to get and write files. (In the past,
we've used only 4-8GB of Netapp disks because the bottleneck was
spindles, and using 36GB disks just exacerbated things).
So it's doable, but LA is a finger in the wind - you won't
know about the weather much.
nfsstat and looking on the NFS server will tell you lots more...
Another approach, favored for IMAP servers that shouldn't use NFS,
is a proxy that takes your POP session, authenticates you, looks
up your name in some directory (LDAP is good) and then connects
you to "your" pop server (authenticating you if needed) and becoming
a passthrough.
A well-written proxy, threading not forking, can handle HUGE
numbers of connections with little strain. In fact, the machines
can be fairly lightweight. You need a tcp stack, not a CPU. A
P500 does fine.
Additionally, the TCP connections to the pop servers are all on
fast and local links. You can safely reduce the timeouts for TCP
sockets to WAIT on the backend servers. Also, TCP windows will be
fairly large - the backends will never ever deal with a user on a
poor, slow or modem link.
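The passthrough core of such a proxy is small. A sketch in C, assuming the
USER/PASS exchange and the directory lookup have already happened and we
hold connected sockets for both sides (single connection, blocking; a real
proxy would be threaded or event-driven, as noted below):

    #include <sys/select.h>
    #include <unistd.h>

    /* Shuttle bytes both ways between client and backend until
     * either side closes. */
    static void passthrough(int client, int backend)
    {
        char buf[4096];
        fd_set fds;
        int maxfd = (client > backend ? client : backend) + 1;

        for (;;) {
            FD_ZERO(&fds);
            FD_SET(client, &fds);
            FD_SET(backend, &fds);
            if (select(maxfd, &fds, NULL, NULL, NULL) <= 0)
                break;
            if (FD_ISSET(client, &fds)) {
                int n = read(client, buf, sizeof(buf));
                if (n <= 0 || write(backend, buf, n) != n)
                    break;
            }
            if (FD_ISSET(backend, &fds)) {
                int n = read(backend, buf, sizeof(buf));
                if (n <= 0 || write(client, buf, n) != n)
                    break;
            }
        }
        close(client);
        close(backend);
    }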
The TIS Gauntlet firewall had a pop proxy - based on plug-gw that
could have been modified for this. But the license disallows it.
And you'd want lots of threading.
Date: Thu, 22 Jan 2004 10:09:27 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: qpopper high load average
On Thu, 22 Jan 2004, Bart Dumon wrote:
>
> > btw, did i ever send you anything regarding that unpopable zero-filesize
> > problem? I don't remember. If not, i'm sorry, it must have slipped my mind
> > and i'll revisit it.
>
> nope, i thought you were on vacation or something like that :) so i thought
> of giving the mailinglist a shot.
>
ok, sorry, i got a fix for it now, along with a patch to check and set
status flags in the maildir filename (lets the LAST command work with
Maildir). Someone mentioned that some clients don't use UIDLs for status
(e.g. yahoo's remote pop3 service). Don't know if that affects you at all.
I'll send you a test patch, if you'd like to try it out for yourself.
Let me know.
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Date: Thu, 22 Jan 2004 10:56:11 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: --disable status, again
On Thu, 22 Jan 2004, Jim Medley wrote:
>
> I am presently looking at the file 'qpopper4.0.5/popper/pop_stat.c'.
> I believe this is where I might make the correction, but have no idea
> what to do. Any thoughts other than changing to imap? Thanks, Jim
>
what kind of mail clients?
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Date: Thu, 22 Jan 2004 12:25:16 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: --disable status, again
i think i see.
Go in: pop_updt.c
Search for: status_written++
2 lines above it you'll see:
if ( mp->retr_flag )
change it to:
if ( mp->retr_flag && p->bUpdate_status_hdrs )
Recompile, reinstall, blah blah.
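As a patch, the change looks roughly like this (hunk header elided; the
exact line numbers depend on your 4.0.5 source):

    --- pop_updt.c.orig
    +++ pop_updt.c
    @@ ... @@
    -            if ( mp->retr_flag )
    +            if ( mp->retr_flag && p->bUpdate_status_hdrs )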
Developers..overlooked bug? Can we really assume that we should update the
Status: header just because it exists, even if the admin is telling us not
to?
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
On Thu, 22 Jan 2004, Jim Medley wrote:
> Tony: Below are copies of a test message sent to me after configuring
> with --disable-status. The first is the one downloaded first which
> shows as unread and the second showed as read. The same message was
> downloaded using Eudora on a Mac and PC.....
> #1...
> >Return-Path: <mcfernandez at aesrg.tamu dot edu>
> >Received: from [165.95.60.251] (mcfernandez.tamu.edu [165.95.60.251])
> > by aesrg.tamu.edu (8.12.9/8.12.9) with ESMTP id i0MK29lP000443
> > for <jmedley at aesrg.tamu dot edu>; Thu, 22 Jan 2004 14:02:10 -0600 (CST)
> >Mime-Version: 1.0
> >Message-Id: <a06020401bc35dcd020e0 at [165.95.60 dot 251]>
> >Date: Thu, 22 Jan 2004 14:02:40 -0600
> >To: Jim Medley <jmedley at aesrg.tamu dot edu>
> >From: Christina Fernandez <mcfernandez at aesrg.tamu dot edu>
> >Subject: test
> >Content-Type: text/plain; charset="us-ascii" ; format="flowed"
> >Status:
>
> #2....
> >Return-Path: <mcfernandez at aesrg.tamu dot edu>
> >Received: from [165.95.60.251] (mcfernandez.tamu.edu [165.95.60.251])
> > by aesrg.tamu.edu (8.12.9/8.12.9) with ESMTP id i0MK29lP000443
> > for <jmedley at aesrg.tamu dot edu>; Thu, 22 Jan 2004 14:02:10 -0600 (CST)
> >Mime-Version: 1.0
> >Message-Id: <a06020401bc35dcd020e0 at [165.95.60 dot 251]>
> >Date: Thu, 22 Jan 2004 14:02:40 -0600
> >To: Jim Medley <jmedley at aesrg.tamu dot edu>
> >From: Christina Fernandez <mcfernandez at aesrg.tamu dot edu>
> >Subject: test
> >Content-Type: text/plain; charset="us-ascii" ; format="flowed"
> >Status: RO
> Thanks, Jim
>
>
> >On Thu, 22 Jan 2004, Jim Medley wrote:
> >
> >> Most mail clients are Eudora 6.0 on Macs, but it also happens with
> >> Outlook on Win 2000. Jim
> >>
> >
> >if those clients don't check the Status: or X-UIDL headers to see if a
> >message is read, but check UIDLs and/or the LAST command instead, then i
> >don't think --disable-status will do any good, because it just suppresses
> >those headers.
> >I think Eudora DID use Status: though, i'm not sure.
> >Do you see either of those headers in the mail in your mail client?
> >Realize, any messages you are trying to retrieve that were initially
> >retrieved BEFORE you turned on --disable-status will still have the
> >Status: or X-UIDL headers in them, unless you go and remove them manually.
> >Messages after you turned on --disable-status should not, and you should
> >check those headers in your client to make sure they're not in there.
> >
> >--Tony
> >.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> >Anthony J. Biacco Network Administrator/Engineer
> >thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
> >
> > "You find magic from your god, and I find magic everywhere"
> >.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> >
> >> >On Thu, 22 Jan 2004, Jim Medley wrote:
> >> >
> >> >>
> >> >> I am presently looking at the file 'qpopper4.0.5/popper/pop_stat.c'.
> >> >> I believe this is where I might make the correction, but have no idea
> >> >> what to do. Any thoughts other than changing to imap? Thanks, Jim
> >> >>
> >> >
> >> >what kind of mail clients?
> >> >
> >> >--Tony
> >> >.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> >> >Anthony J. Biacco Network Administrator/Engineer
> >> >thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
> >> >
> >> > "You find magic from your god, and I find magic everywhere"
> >> >.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> >>
> >>
> >>
> >
> >--
> >.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> >Anthony J. Biacco Network Administrator/Engineer
> >thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
> >
> > "You find magic from your god, and I find magic everywhere"
> >.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
>
>
>
--
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
From: "Paul" <paul at kbs.net dot au>
Subject: Re: qpopper high load average
Date: Fri, 23 Jan 2004 10:15:20 +1100
Hi Bart,
Average size of our mailboxes is around 800KB per mailbox. Of the 297,000,
about 250,000 are active and live; 50,000 of them are
inactive/suspended, holding 55GB between them.
Our total spool is currently 240GB, including the 55GB for the
suspended/inactive users. We just haven't deleted their old spool data yet.
What do you use for your SMTP server?
----- Original Message -----
From: "Bart Dumon" <bart at crossbar dot net>
To: "Paul" <paul at kbs.net dot au>
Cc: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
Sent: Thursday, January 22, 2004 9:58 PM
Subject: Re: qpopper high load average
> Hi Paul,
>
> We have 320k mailboxes, 4 dual xeon mailservers. the setup was
> highly influenced by the requirement of redundancy, therefor we're
> using multiple machines with a spool on NFS. at least 2 boxes
> should be able to handle all the traffic, so i'm quite certain
> there is an underlying problem which affects our performance.
>
> The NFS connection is 100mbit FD for each box, the NAS itself
> is 1Gbit FD. during business hours one box can easily get
> to 30-40mbit/sec. resulting in 100-120mbit/sec of throughput
> on the NAS.
> You're considering maildir/nfs to be a potential slow down,
> but i don't see it that way because disk I/O is usually the
> bottleneck on smtp/pop servers. both ways have their advantages
> and disadvantages.
>
> Anyway, you seem to be doing pretty well with your single system,
> but do you have any idea of the average size of the mailboxes
> on your system and the percentage of active users?
>
>
> bart
>
> On Thu, Jan 22, 2004 at 10:51:39AM +1100, Paul wrote:
> > HI Bart,
> >
> > We run a mailserver for a 297,000 odd mailboxes. We normally during peak
see
> > a load average of about 15, this is sustained and doesn't flucate that
much.
> > The box is a dual 3.06ghz xeon with 3gb of DDR and 525GB of Ultra320
raid5
> > which is used for spool. On the server we have qpopper auth'ing via
mysql
> > and exim as the local smtp server.
> > Our mail is stored in a double hash array like
/var/spool/mail/e/b/ebadine
> > and we just use flatfile, instead of Maildir. The riad should be raid10
but
> > it's not always realistic, we use raid5 for example, its a bit of slow
down
> > compared to raid10 but meets our needs
> >
> > Maildir would be a potential slow down, especially over NFS. What speed
> > Mb/sec do you get over the NFS connection? I'm assuming its 100mbit full
> > duplex
> > When we originally had everyone in /var/spool/mail/$username our system
> > bogged down to an insane level, once we double hashed it, it was
fantasic.
> >
> > Do run any form of performance monitoring on the server? We use mrtg and
> > graph cpu, network, memory etc so we can easily spot bottlenecks.
> >
> > ----- Original Message -----
> > From: "Bart Dumon" <bart at crossbar dot net>
> > To: "Paul" <paul at kbs.net dot au>
> > Cc: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
> > Sent: Thursday, January 22, 2004 8:41 AM
> > Subject: Re: qpopper high load average
> >
> >
> > >
> > > On Wed, Jan 21, 2004 at 11:24:12AM +1100, Paul wrote:
> > > > How many connections/sec for the server during peak? What does the
load
> > avg
> > > > get to? What storage is it (the mail spool)? What filesystem?
> > >
> > > during peaks we get about 15 connections/sec per server, the load
> > > gets up to 800 if we do not interfere. once the popper gets
> > > restarted the load will decrease. the mail spool is kept on a NAS
> > > and is accessed using nfs.
> > >
> > >
> > > bart
> > > --
> > >
>
>
Date: Fri, 23 Jan 2004 00:29:35 +0100
From: Bart Dumon <bart at crossbar dot net>
Subject: Re: qpopper high load average
Paul,
i do find an average of 800KB/mailbox relatively small; we have an
average of 3.6MB/mailbox. obviously making a temp copy of an
mbox file every time a user pops would cause a lot of disk I/O,
in our case (60 * 3.6MB)/sec copied to a temp location, and that's
exactly why i think maildir is not the slowdown in our case.
and we're using Sendmail.. (expecting a lot of comments now :)
bart
On Fri, Jan 23, 2004 at 10:15:20AM +1100, Paul wrote:
>
> Hi Bart,
>
> Average size of our mailboxes is around 800kb per mailbox. of the 297,000
> about 250,000 of them are active and live. 50,000 of them are
> inactive/suspended holding up 55GB between them.
> Our total spool is currently 240GB including the 55GB for the
> suspend/inactive users. We just haven't deleted their old spool data yet.
> What do you use for your SMTP server?
>
> ----- Original Message -----
> From: "Bart Dumon" <bart at crossbar dot net>
> To: "Paul" <paul at kbs.net dot au>
> Cc: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
> Sent: Thursday, January 22, 2004 9:58 PM
> Subject: Re: qpopper high load average
>
>
> > Hi Paul,
> >
> > We have 320k mailboxes, 4 dual xeon mailservers. the setup was
> > highly influenced by the requirement of redundancy, therefor we're
> > using multiple machines with a spool on NFS. at least 2 boxes
> > should be able to handle all the traffic, so i'm quite certain
> > there is an underlying problem which affects our performance.
> >
> > The NFS connection is 100mbit FD for each box, the NAS itself
> > is 1Gbit FD. during business hours one box can easily get
> > to 30-40mbit/sec. resulting in 100-120mbit/sec of throughput
> > on the NAS.
> > You're considering maildir/nfs to be a potential slow down,
> > but i don't see it that way because disk I/O is usually the
> > bottleneck on smtp/pop servers. both ways have their advantages
> > and disadvantages.
> >
> > Anyway, you seem to be doing pretty well with your single system,
> > but do you have any idea of the average size of the mailboxes
> > on your system and the percentage of active users?
> >
> >
> > bart
> >
> > On Thu, Jan 22, 2004 at 10:51:39AM +1100, Paul wrote:
> > > HI Bart,
> > >
> > > We run a mailserver for a 297,000 odd mailboxes. We normally during peak
> see
> > > a load average of about 15, this is sustained and doesn't flucate that
> much.
> > > The box is a dual 3.06ghz xeon with 3gb of DDR and 525GB of Ultra320
> raid5
> > > which is used for spool. On the server we have qpopper auth'ing via
> mysql
> > > and exim as the local smtp server.
> > > Our mail is stored in a double hash array like
> /var/spool/mail/e/b/ebadine
> > > and we just use flatfile, instead of Maildir. The riad should be raid10
> but
> > > it's not always realistic, we use raid5 for example, its a bit of slow
> down
> > > compared to raid10 but meets our needs
> > >
> > > Maildir would be a potential slow down, especially over NFS. What speed
> > > Mb/sec do you get over the NFS connection? I'm assuming its 100mbit full
> > > duplex
> > > When we originally had everyone in /var/spool/mail/$username our system
> > > bogged down to an insane level, once we double hashed it, it was
> fantasic.
> > >
> > > Do run any form of performance monitoring on the server? We use mrtg and
> > > graph cpu, network, memory etc so we can easily spot bottlenecks.
> > >
> > > ----- Original Message -----
> > > From: "Bart Dumon" <bart at crossbar dot net>
> > > To: "Paul" <paul at kbs.net dot au>
> > > Cc: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
> > > Sent: Thursday, January 22, 2004 8:41 AM
> > > Subject: Re: qpopper high load average
> > >
> > >
> > > >
> > > > On Wed, Jan 21, 2004 at 11:24:12AM +1100, Paul wrote:
> > > > > How many connections/sec for the server during peak? What does the
> load
> > > avg
> > > > > get to? What storage is it (the mail spool)? What filesystem?
> > > >
> > > > during peaks we get about 15 connections/sec per server, the load
> > > > gets up to 800 if we do not interfere. once the popper gets
> > > > restarted the load will decrease. the mail spool is kept on a NAS
> > > > and is accessed using nfs.
> > > >
> > > >
> > > > bart
From: "Eric" <nyre at kiercorp dot com>
Subject: Re: qpopper - tcpdump port 110
Date: Fri, 23 Jan 2004 07:36:22 -0700
I tested with the Evolution mail client and get: "Error while fetching
mail: could not connect to mail.slkjs.com (port 110); connection timed out."
The server shows about 6 attempts at port 110 with similar data as before.
> >
> > qpopper is supposed to be watching but I am not able to get through with
> > outlook express.
> > dump of port 110 shows
> > timestamp name of my isp > domain name of mail server S a bunch of
> > numbers win 65536 <mss 1460, nop,nop, sackOK>
> > 4 times same thing but outlook won't connect?
> >
> >
> >
> >
> > Eric Nyre
> >
> >
> > The opinions and/or expressions contained in the adjoined communication
> are
> > strictly those of the individual conveying the communication and may or
> may
> > not be those of The Kier Companies.
> >
> >
>
>
>
Date: Fri, 23 Jan 2004 09:36:59 -0800
From: Ken Anderson <ka at pacific dot net>
Subject: virtualhosting with qpopper
Hello,
Does qpopper support virtualhosting the way sendmail does it, using the
virtusertable and generics table to map incoming mail to
user at domainX dot com to a local user and map outgoing mail from a local user
to user at domainX dot com?
To clarify the question, how could I map user at domainX dot com to local users
so that a pop3 user could use user at domainX dot com as their username to
login to qpopper? The goal is to make the real username invisible to the
virtual user.
Something tells me I'm talking about virtual users, and not real users
in /etc/passwd now, and I should look at things like the mysql patch for
qpopper to make this work? Any other ideas?
Thanks,
Ken Anderson
Pacific.Net
Date: Fri, 23 Jan 2004 10:50:14 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: virtualhosting with qpopper
hmm, i think my patch will do what you want it to do, in concept, just by
a different method than you describe.
the patch pulls an email address from the mysql table for the login name,
auths off that, and directly maps that to the mail spool. that's the user;
there's no /etc/passwd usage.
so a user may login to qpopper as user at domain dot com, and then qpopper will
look in /var/spool/mail/domain.com/user for the mail. That's the default,
anyway.
It may be closer to what you want in that you can specify a
username-independent place where the mail is. for example, in the spool
field of the mysql table for user at domain dot com, you could say the mail was
in /var/spool/mail/joeblow or /var/spool/mail/domain/janesuck or wherever;
the mailbox doesn't have to be called 'user'.
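For illustration, the kind of table this implies might look like the
following sketch; the column names are assumptions, and the real schema
shipped with the mysql patch may differ:

    CREATE TABLE popusers (
        username  VARCHAR(128) NOT NULL,   # login, e.g. 'user@domain.com'
        password  VARCHAR(128) NOT NULL,
        uid       INT          NOT NULL,   # unix uid qpopper uses
        gid       INT          NOT NULL,   # unix gid qpopper uses
        spool     VARCHAR(255) NOT NULL,   # e.g. '/var/spool/mail/domain.com/user'
        PRIMARY KEY (username)
    );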
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
On Fri, 23 Jan 2004, Ken Anderson wrote:
> Hello,
>
> Does qpopper support virtualhosting the way sendmail does it, using the
> virtusertable and generics table to map incoming mail to
> user at domainX dot com to a local user and map outgoing mail from a local user
> to user at domainX dot com?
>
> To clarify the question, how could I map user at domainX dot com to local users
> so that a pop3 user could use user at domainX dot com as their username to
> login to qpopper? The goal is to make the real username invisible to the
> virtual user.
>
> Something tells me I'm talking about virtual users, and not real users
> in /etc/passwd now, and I should look at things like the mysql patch for
> qpopper to make this work? Any other ideas?
>
> Thanks,
> Ken Anderson
> Pacific.Net
>
>
--
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Date: Fri, 23 Jan 2004 14:00:39 -0500 (EST)
From: Alan Brown <alanb at digistar dot com>
Subject: Re: virtualhosting with qpopper
On Fri, 23 Jan 2004, Ken Anderson wrote:
> Does qpopper support virtualhosting the way sendmail does it.
No.
It knows local userIDs, that's it.
Date: Fri, 23 Jan 2004 12:54:33 -0800
From: Ken Anderson <ka at pacific dot net>
Subject: Re: virtualhosting with qpopper
The Little Prince wrote:
> hmm, i think my patch will do what you want it to do, in concept, just in
> a different method than you describe.
>
> the patch pulls an email address from the mysql table for the login name,
> auths off that, and directly maps that to the mail spool. that's the user,
> there's no /etc/passwd usage.
> so a user may login to qpopper as user at domain dot com, and then qpopper will
> look in /var/spool/mail/domain.com/user for the mail. That's the default
> anyway.
> It may be closer to what you want in the fact that you can specify a
> username-indepedent place where the mail is. for example, in the spool
> field of the mysql table for user at domain dot com, you could say the mail was
> in /var/spool/mail/joeblow or /var/spool/mail/domain/janesuck or wherever,
> the mailbox doesn't have to be called 'user'
>
Yes, that's true, but for compatibility with sendmail and its way of
doing delivery the user would be /var/mail/user, and sendmail would
continue to use /etc/passwd to check for local delivery. But can
qpopper work on the mailspool as the UID of the user in /etc/passwd,
even if it's using mysql for a backend?
Thanks,
Ken Anderson
> --Tony
> .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> Anthony J. Biacco Network Administrator/Engineer
> thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
>
> "You find magic from your god, and I find magic everywhere"
> .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
>
> On Fri, 23 Jan 2004, Ken Anderson wrote:
>
>
>>Hello,
>>
>>Does qpopper support virtualhosting the way sendmail does it, using the
>>virtusertable and generics table to map incoming mail to
>>user at domainX dot com to a local user and map outgoing mail from a local user
>>to user at domainX dot com?
>>
>>To clarify the question, how could I map user at domainX dot com to local users
>>so that a pop3 user could use user at domainX dot com as their username to
>>login to qpopper? The goal is to make the real username invisible to the
>>virtual user.
>>
>>Something tells me I'm talking about virtual users, and not real users
>>in /etc/passwd now, and I should look at things like the mysql patch for
>>qpopper to make this work? Any other ideas?
>>
>>Thanks,
>>Ken Anderson
>>Pacific.Net
>>
>>
>
>
Date: Fri, 23 Jan 2004 22:06:14 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: virtualhosting with qpopper
On Fri, 23 Jan 2004, Ken Anderson wrote:
>
> Yes, that's true, but for compatibility with sendmail and its way of
> doing delivery the user would be /var/mail/user, and sendmail would
> continue to use /etc/passwd to check for local delivery. But can
> qpopper work on the mailspool as the UID of the user in /etc/passwd,
> even if it's using mysql for a backend?
>
the uid and gid go into mysql and qpopper uses them from there.
do one of the following..
1. Keep both databases: mysql for qpopper, and /etc/passwd for sendmail.
That's the short way.
2. Move sendmail to mysql instead of passwd (i believe there are patches
out there)
3. Use an MTA in place of sendmail that has native mysql and
virtualhosting support, e.g. postfix, so you don't need mappings.
i suspect you'll do 1, and in time move towards 3.
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> Thanks,
> Ken Anderson
>
>
>
> > --Tony
> > .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> > Anthony J. Biacco Network Administrator/Engineer
> > thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
> >
> > "You find magic from your god, and I find magic everywhere"
> > .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> >
> > On Fri, 23 Jan 2004, Ken Anderson wrote:
> >
> >
> >>Hello,
> >>
> >>Does qpopper support virtualhosting the way sendmail does it, using the
> >>virtusertable and generics table to map incoming mail to
> >>user at domainX dot com to a local user and map outgoing mail from a local user
> >>to user at domainX dot com?
> >>
> >>To clarify the question, how could I map user at domainX dot com to local users
> >>so that a pop3 user could use user at domainX dot com as their username to
> >>login to qpopper? The goal is to make the real username invisible to the
> >>virtual user.
> >>
> >>Something tells me I'm talking about virtual users, and not real users
> >>in /etc/passwd now, and I should look at things like the mysql patch for
> >>qpopper to make this work? Any other ideas?
> >>
> >>Thanks,
> >>Ken Anderson
> >>Pacific.Net
> >>
> >>
> >
> >
>
--
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
From: "Alan W. Rateliff, II" <lists at rateliff dot net>
Subject: Re: virtualhosting with qpopper
Date: Sat, 24 Jan 2004 08:18:26 -0500
----- Original Message -----
From: "The Little Prince" <thelittleprince at asteroid-b612 dot org>
To: "Ken Anderson" <ka at pacific dot net>
Cc: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
Sent: Saturday, January 24, 2004 1:06 AM
Subject: Re: virtualhosting with qpopper
> On Fri, 23 Jan 2004, Ken Anderson wrote:
>
> >
> > Yes, that's true, but for compatibility with sendmail and its way of
> > doing delivery the user would be /var/mail/user, and sendmail would
> > continue to use /etc/passwd to check for local delivery. But can
> > qpopper work on the mailspool as the UID of the user in /etc/passwd,
> > even if it's using mysql for a backend?
> >
>
> the uid and gid go into mysql and qpopper uses them from there.
> do one of the following:
> 1. Keep both databases, mysql for qpopper, and /etc/passwd for sendmail.
> That's the short way.
> 2. Move sendmail to mysql instead of passwd (i believe there are patches
> out there).
> 3. Use an MTA in place of sendmail that has native mysql and
> virtualhosting support, e.g. postfix, so you don't need mappings.
There is a patch for Sendmail to use MySQL. I have no personal experience
with it, but it exists. Provided that the MySQL db structure is the same for
both purposes, which I'm sure can be easily altered anyway, the same table
can be used for both daemons.
When you're talking about virtual hosting using Sendmail without such a
patch, it uses the database /etc/mail/virtuser (enabled by a FEATURE, and
called whatever you like -- I use virtualmail). That database consists of
an email address key and a local username (or other email address) value
pair.
It's trivial to use that as your primary user database: make a script which
imports it into MySQL for QPopper address-to-user conversion, and into
/etc/passwd for your local Unix user accounts. If you use *dbm databases
instead of hashes, it should also not be difficult to make a *dbm table
lookup using TLP's MySQL patch as a reference.
--
Alan W. Rateliff, II : RATELIFF.NET
Independent Technology Consultant : alan2 at rateliff dot net
(Office) 850/350-0260 : (Mobile) 850/559-0100
-------------------------------------------------------------
[System Administration][IT Consulting][Computer Sales/Repair]
Date: Sat, 24 Jan 2004 09:54:40 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: virtualhosting with qpopper
On Sat, 24 Jan 2004, Alan W. Rateliff, II wrote:
>
> It's trivial to use that as your primary user database: make a script which
> imports it into MySQL for QPopper address-to-user conversion, and into
> /etc/passwd for your local Unix user accounts. If you use *dbm databases
> instead of hashes, it should also not be difficult to make a *dbm table
> lookup using TLP's MySQL patch as a reference.
>
>
he can do that, but he also has to cross-reference the local usernames with
/etc/passwd to pull in the passwords and uids/gids.
however, what may be trivial for us to write may not be trivial for him
:-)
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Date: Sat, 24 Jan 2004 11:05:05 -0800
From: Ken Anderson <ka at pacific dot net>
Subject: Re: virtualhosting with qpopper
The Little Prince wrote:
> On Sat, 24 Jan 2004, Alan W. Rateliff, II wrote:
>
>>It's trivial to use that as your primary user database: make a script which
>>imports it into MySQL for QPopper address-to-user conversion, and into
>>/etc/passwd for your local Unix user accounts. If you use *dbm databases
>>instead of hashes, it should also not be difficult to make a *dbm table
>>lookup using TLP's MySQL patch as a reference.
>>
>>
>
>
> he can do that, but he also has to cross-reference the local usernames with
> /etc/passwd to pull in the passwords and uids/gids.
> however, what may be trivial for us to write may not be trivial for him
> :-)
It's not a big deal to synchronize the password file. Using mysql with
sendmail is a bear though, due to all of the sendmail maps we use and the
problems with maintaining a patched sendmail. It's more efficient to
have all those hash map lookups happen in memory and to use the standard
sendmail build process to keep up to date on security issues with sendmail.
Thanks everybody for the suggestions.
Ken Anderson
Pacific.Net
> --Tony
> .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> Anthony J. Biacco Network Administrator/Engineer
> thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
>
> "You find magic from your god, and I find magic everywhere"
> .-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
>
>
>
Date: Mon, 2 Feb 2004 14:12:14 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: Corrupted mail drop. [pop_dropcopy.c:863]
On Mon, 2 Feb 2004, Guillermo Llenas wrote:
> > don't use mbox over NFS, unless you got some special locking mechanism
> > going on that makes it work right.
>
> OK, so the problems are caused by the locking mechanism.
> I know, and you are right, but the platform here cannot be changed and we
> don't have any special locking mechanism.
> So, considering this, do you have any idea what we can do to try to
> solve this?
>
besides using Maildir/ type boxes, no, i don't.
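For anyone wondering why Maildir sidesteps this: each message is its own
file, written into tmp/ and then rename()d into new/, and the rename is
atomic, so readers and writers never need a lock. A toy sketch of the
delivery step in Python, with the unique-name scheme simplified from what
real Maildir software uses:

    import os
    import socket
    import time

    def maildir_deliver(maildir, message):
        # Simplified unique name; real Maildir names carry more entropy.
        name = "%d.%d.%s" % (time.time(), os.getpid(), socket.gethostname())
        tmp = os.path.join(maildir, "tmp", name)
        f = open(tmp, "wb")
        f.write(message)
        f.close()
        # The atomic step: after this rename the message appears in new/
        # all at once -- there is no half-written mbox to corrupt.
        os.rename(tmp, os.path.join(maildir, "new", name))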
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> Thanks
>
> Guillermo Llenas
> Tecnología
> Inter.net Argentina
> ________________________
> +0054 11 4343-1500
> fax 0054 11 4343-4550
> www.inter.net
> guillermo.llenas at team.ar.inter dot net
>
>
> Hi,
> >
> > I have qpopper running with close to 1500 e-mail accounts across
> > different domains. It works fine, but quite often some errors appear and
> > the mbox gets corrupted. The only way the user can check e-mail again is
> > to open the mbox with mail -f and delete one message. Apparently the mbox
> > corrupts when a user connects and checks mail with Outlook XP or 2000
> > and, with that session still open, checks again with some webmail at the
> > same time. A similar case is when a user checks the same mbox almost
> > simultaneously from two clients: first the "is busy" error appears, then
> > the mbox gets corrupted. Is there any way to avoid this? Any suggestion
> > will be very much appreciated. I have qpopper on Red Hat 9, with exim
> > and mysql with qpopper-mysql-0.12.patch.
>
> >
> >> The logs in two different machines at the same time.
> >>
> >> Feb 2 17:45:44 fesprueba popper[7515]: (v4.0.5-mysql-0.12) POP
> >> login by user "ruben.benayon" at (host106.200.61.143.ifxnw.com.ar)
> >> 200.61.143.106 [pop_log.c:244]
> >> Feb 2 18:05:07 fes11 popper[31687]: ruben.benayon at
> >> host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR POP EOF or
> >> I/O Error [popper.c:847]
> >> Feb 2 18:05:09 fes11 popper[31687]: I/O error flushing output to
> >> client ruben.benayon at host106.200.61.143.ifxnw.com.ar
> >> [200.61.143.106]: Operation not permitted (1) [pop_send.c:709]
> >> Feb 2 18:05:54 fesprueba popper[7515]: ruben.benayon at
> >> host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR POP EOF or
> >> I/O Error [popper.c:847]
> >> Feb 2 18:05:54 fesprueba popper[7515]: Unable to move
> >> /export/poptemp/r/u/.ruben.benayon at team.ar.inter.net dot pop to
> >> /export/mail/team.ar.inter.net/r/u/ruben.benayon: No such file or
> >> directory (2) [pop_updt.c:766]
> >> Feb 2 18:05:54 fesprueba popper[7515]: ruben.benayon at
> >> host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR Error copying
> >> messages from temp drop back to mailspool: Stale NFS file handle
> >> (116) [pop_updt.c:797]
> >> Feb 2 18:05:54 fesprueba popper[7515]: ruben.benayon at
> >> host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR POP mailbox
> >> update for ruben.benayon failed! [popper.c:857]
> >> Feb 2 18:19:12 fesprueba popper[8304]: ruben.benayon at
> >> host110.200.61.145.ifxnw.com.ar (200.61.145.110): -ERR [SYS/PERM]
> >> Unable to process From lines (envelopes) in
> >> /export/mail/team.ar.inter.net/r/u/ruben.benayon; change recognition
> >> mode or check for corrupted mail drop. [pop_dropcopy.c:863]
> >> Feb 2 18:20:13 fesprueba popper[8323]: ruben.benayon at
> >> host110.200.61.145.ifxnw.com.ar (200.61.145.110): -ERR [SYS/PERM]
> >> Unable to process From lines (envelopes) in
> >> /export/mail/team.ar.inter.net/r/u/ruben.benayon; change recognition
> >> mode or check for corrupted mail drop. [pop_dropcopy.c:863]
> >> Feb 2 18:20:31 fesprueba popper[8328]: ruben.benayon at
> >> host110.200.61.145.ifxnw.com.ar (200.61.145.110): -ERR [SYS/PERM]
> >> Unable to process From lines (envelopes) in
> >> /export/mail/team.ar.inter.net/r/u/ruben.benayon; change recognition
> >> mode or check for corrupted mail drop. [pop_dropcopy.c:863]
> >>
> >>
> >>
> >>
> >> Guillermo Llenas
> >> Tecnología
> >> Inter.net Argentina
> >> ________________________
> >> +0054 11 4343-1500
> >> fax 0054 11 4343-4550
> >> www.inter.net
> >> guillermo.llenas at team.ar.inter dot net
>
>
>
--
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Date: Mon, 2 Feb 2004 13:48:29 -0800 (PST)
From: The Little Prince <thelittleprince at asteroid-b612 dot org>
Subject: Re: Corrupted mail drop. [pop_dropcopy.c:863]
On Mon, 2 Feb 2004, Guillermo Llenas wrote:
>
> Hi,
>
> I have qpopper running with close to 1500 e-mail accounts across
> different domains. It works fine, but quite often some errors appear and
> the mbox gets corrupted. The only way the user can check e-mail again is
> to open the mbox with mail -f and delete one message. Apparently the mbox
> corrupts when a user connects and checks mail with Outlook XP or 2000
> and, with that session still open, checks again with some webmail at the
> same time. A similar case is when a user checks the same mbox almost
> simultaneously from two clients: first the "is busy" error appears, then
> the mbox gets corrupted. Is there any way to avoid this? Any suggestion
> will be very much appreciated. I have qpopper on Red Hat 9, with exim
> and mysql with qpopper-mysql-0.12.patch.
>
don't use mbox over NFS, unless you got some special locking mechanism
going on that makes it work right.
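For what it's worth, the classic "special locking mechanism" for NFS is
link()-based dot-locking: open()/flock() have historically been unreliable
across NFS, but link() is atomic there, and the link count tells the truth
even when the reply to a successful link() is lost. A minimal sketch in
Python (releasing the lock is just unlinking mailbox.lock):

    import os
    import socket

    def dotlock(mailbox):
        # Create a unique temp file, try to link() it to mailbox.lock,
        # then trust the link count rather than link()'s return code.
        tmp = "%s.%d.%s" % (mailbox, os.getpid(), socket.gethostname())
        open(tmp, "w").close()
        try:
            os.link(tmp, mailbox + ".lock")
        except OSError:
            pass                      # someone else may hold the lock
        locked = os.stat(tmp).st_nlink == 2
        os.unlink(tmp)
        return locked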
--Tony
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
> The logs in two different machines at the same time.
>
> Feb 2 17:45:44 fesprueba popper[7515]: (v4.0.5-mysql-0.12) POP login by user "ruben.benayon" at (host106.200.61.143.ifxnw.com.ar) 200.61.143.106 [pop_log.c:244]
> Feb 2 18:05:07 fes11 popper[31687]: ruben.benayon at host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR POP EOF or I/O Error [popper.c:847]
> Feb 2 18:05:09 fes11 popper[31687]: I/O error flushing output to client ruben.benayon at host106.200.61.143.ifxnw.com.ar [200.61.143.106]: Operation not permitted (1) [pop_send.c:709]
> Feb 2 18:05:54 fesprueba popper[7515]: ruben.benayon at host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR POP EOF or I/O Error [popper.c:847]
> Feb 2 18:05:54 fesprueba popper[7515]: Unable to move /export/poptemp/r/u/.ruben.benayon at team.ar.inter.net dot pop to /export/mail/team.ar.inter.net/r/u/ruben.benayon: No such file or directory (2) [pop_updt.c:766]
> Feb 2 18:05:54 fesprueba popper[7515]: ruben.benayon at host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR Error copying messages from temp drop back to mailspool: Stale NFS file handle (116) [pop_updt.c:797]
> Feb 2 18:05:54 fesprueba popper[7515]: ruben.benayon at host106.200.61.143.ifxnw.com.ar (200.61.143.106): -ERR POP mailbox update for ruben.benayon failed! [popper.c:857]
> Feb 2 18:19:12 fesprueba popper[8304]: ruben.benayon at host110.200.61.145.ifxnw.com.ar (200.61.145.110): -ERR [SYS/PERM] Unable to process From lines (envelopes) in /export/mail/team.ar.inter.net/r/u/ruben.benayon; change recognition mode or check for corrupted mail drop. [pop_dropcopy.c:863]
> Feb 2 18:20:13 fesprueba popper[8323]: ruben.benayon at host110.200.61.145.ifxnw.com.ar (200.61.145.110): -ERR [SYS/PERM] Unable to process From lines (envelopes) in /export/mail/team.ar.inter.net/r/u/ruben.benayon; change recognition mode or check for corrupted mail drop. [pop_dropcopy.c:863]
> Feb 2 18:20:31 fesprueba popper[8328]: ruben.benayon at host110.200.61.145.ifxnw.com.ar (200.61.145.110): -ERR [SYS/PERM] Unable to process From lines (envelopes) in /export/mail/team.ar.inter.net/r/u/ruben.benayon; change recognition mode or check for corrupted mail drop. [pop_dropcopy.c:863]
>
>
>
>
> Guillermo Llenas
> Tecnología
> Inter.net Argentina
> ________________________
> +0054 11 4343-1500
> fax 0054 11 4343-4550
> www.inter.net
> guillermo.llenas at team.ar.inter dot net
--
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
Anthony J. Biacco Network Administrator/Engineer
thelittleprince at asteroid-b612.org http://www.asteroid-b612 dot org
"You find magic from your god, and I find magic everywhere"
.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-._.-.
From: "Eric" <nyre at kiercorp dot com>
Subject: Is the list alive?
Date: Mon, 2 Feb 2004 09:19:58 -0700
I've been getting an error message back, just checking to see if the list is
alive.
No space on server?
Eric
Date: Mon, 02 Feb 2004 00:01:24 -0500
From: Joe Maimon <jmaimon at ttec dot com>
Subject: Qpopper patches
Hello All,
I know I am a newb but I have been playing with some patches and here
they are for anyone to give feedback on.
http://www.jmaimon.com/qpopper
As time and interest allows, I plan on adding to the patches there.
Keep flames short and sweet.
Thanks,
Joe
From: "kclo2000" <kclo2000 at netvigator dot com>
Subject: Re: Qpopper4.0.5 + PAM + Solaris8 (or Solaris 9) + LDAP
Date: Wed, 4 Feb 2004 13:19:30 +0800
I still encounter the same problem and have no solution. Does anyone have a
workaround for it?
----- Original Message -----
From: "Christopher Crowley" <ccrowley at tulane dot edu>
To: "Subscribers of Qpopper" <qpopper at lists.pensive dot org>
Sent: Wednesday, December 10, 2003 1:21 AM
Subject: Re: Qpopper4.0.5 + PAM + Solaris8 (or Solaris 9) + LDAP
> I have experienced the same issue on Solaris 9 with LDAP auth and Qpopper.
>
> If we push an LDAP client configuration which contains multiple LDAP
> servers, the system can authenticate IMAP, SMTP and SSH clients, but
> Qpopper doesn't authenticate until we return the LDAP configuration to a
> single host.
>
> Christopher Crowley
> Technology Services
> Tulane University
> ccrowley at tulane dot edu
> 504.314.2535
>
Date: Mon, 09 Feb 2004 00:54:57 -0500
Subject: Re: Is the list alive?
From: george <gasjr4wd at mac dot com>
On 2/2/04 11:19 AM, "Eric" <nyre at kiercorp dot com> wrote:
> I've been getting an error message back, just checking to see if the list is
> alive.
> No space on server?
>
>
> Eric
>
>
>
I forgot I even signed up...
George
Date: Mon, 9 Feb 2004 16:58:50 -0500
From: Joseph S D Yao <jsdy at center.osis dot gov>
Subject: Re: Is the list alive?
On Mon, Feb 09, 2004 at 12:54:57AM -0500, george wrote:
> On 2/2/04 11:19 AM, "Eric" <nyre at kiercorp dot com> wrote:
> > I've been getting an error message back, just checking to see if the list is
> > alive.
> > No space on server?
> >
> > Eric
>
> I forgot I even signed up...
>
> George
I haven't seen messages since the fourth until these, but these seem to
be getting through to us.
Perhaps there have been no problems? ;-)
--
Joe Yao jsdy at center.osis dot gov - Joseph S. D. Yao
OSIS Center Systems Support EMT-B
-----------------------------------------------------------------------
This message is not an official statement of OSIS Center policies.
Date: Mon, 09 Feb 2004 15:03:46 -0800
From: "Derek C." <coffee at blarg dot net>
Subject: Re: Is the list alive?
Huh? What?
I'm going back to sleep now.
At 09:54 PM 2/8/2004, george wrote:
>On 2/2/04 11:19 AM, "Eric" <nyre at kiercorp dot com> wrote:
>
> > I've been getting an error message back, just checking to see if the
> list is
> > alive.
> > No space on server?
> >
> >
> > Eric
> >
> >
> >
>
>I forgot I even signed up...
>
>George
Date: Mon, 9 Feb 2004 14:11:28 -0500
From: Chuck Yerkes <chuck+qpopper at yerkes dot com>
Subject: SQL and LDAP (Re: virtualhosting with qpopper)
Quoting Ken Anderson (ka at pacific dot net):
....
> Yes, that's true, but for compatibility with sendmail and its way of
> doing delivery the user would be /var/mail/user, and sendmail would
> continue to use /etc/passwd to check for local delivery. But can
> qpopper work on the mailspool as the UID of the user in /etc/passwd,
> even if it's using mysql for a backend?
I'd say that sendmail doesn't USE /etc/passwd, and mostly it's true.
Except that a mailer with the "w" flag will do a getpwent call,
mainly to keep a local mailer from being passed mail for a nonexistent
user. It's easily redone to support a generic "does this user
exist" call (easier with 8.12, after we'd had to do it over and over
with 8.9 and 8.10/11).
If you remove the "w" flag, sendmail doesn't care about local users.
Your LDA (local delivery agent) does: procmail, mail.local, deliver
(cyrus), whatever you use.
And they define "local" as they see fit. Cyrus and its deliver
can deliver to a mailstore with "users" defined per LDAP or other
means.
Using SQL, to me, seems the wrong approach. There HAVE been LDAP
interfaces to (slower) SQL databases, and that seems about right.
OpenLDAP 2.1+ has generic "backend" modules that let you keep your
data store in db4 files (the default, typical, and really really
fast, at least up to 10 or 20 million entries). There are also backend
modules for SQL. I've not used them with MySQL.
But this means your mail system (MTA, popper, etc) can speak a well
defined generic LDAP protocol while you keep your data in SQL
(which varies in transfer mechanisms for each SQL vendor) for
whatever reason you think you need.
Just note that SQL, with a full-blown query language, is a bunch
slower than a native LDAP server, by an order of magnitude. It's
far far more robust, but email and auth don't generally touch on
that robustness and richness of features.
To bring it back on topic, I think it would be far more useful to have
a qpopper that natively could use LDAP to authenticate and also to
get a user's mailstore location.
uid: chuck at domain dot com
userPasswd: {ssha1}foobar19
mailHost: mstore-5
mailFile: /shared/c/chuck/inbox
... (etc)
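A sketch of the lookup such a qpopper could do, using the python-ldap
module; the attribute names follow the example entry above, while the
server URL and base DN are made-up placeholders:

    import ldap

    def mailstore_for(user):
        con = ldap.initialize("ldap://ldap.example.com")
        con.simple_bind_s()        # anonymous bind for the lookup
        res = con.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                           "(uid=%s)" % user, ["mailHost", "mailFile"])
        if not res:
            return None
        dn, attrs = res[0]
        return attrs["mailHost"][0], attrs["mailFile"][0]

Authentication itself would then just be a second bind against the entry's
DN with the password the POP client supplied, so the hash never has to
leave the directory.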
Date: Tue, 10 Feb 2004 14:03:40 -0500
From: Joe Maimon <jmaimon at ttec dot com>
Subject: Qpopper patches
[Resent since the list seems to be back up now]
Hello All,
I know I am a newb but I have been playing with some patches and here
they are for anyone to give feedback on.
http://www.jmaimon.com/qpopper
As time and interest allows, I plan on adding to the patches there.
Keep flames short and sweet.
Thanks,
Joe
From: "Muhammad Talha" <talha at worldcall.net dot pk>
Subject: bulletine help needed
Date: Wed, 11 Feb 2004 19:24:12 +0400
Dear all
i have trouble configuring bulletins with qpopper 4.5.
i successfully delivered the first bulletin message to all my users,
but now when i try a second bulletin message it can't be delivered.
it gives an error like this in maillog:
popper /home/talha/.popbull no such file or directory
even when some users do have this file in their home directory, the bulletin
is still sent to them.
our mailboxes are at /var/mail and we don't want to use any home-directory
setting because we are not creating users with home directories.
i also tried reinstalling qpopper with the option
--enable-bulletins=/var/spool/bulls, but with no luck; i also tried
--enable-bulldb=path.
i don't want to keep a record of old bulletin messages sent,
but i still want each bulletin to be delivered only once :)
Please help ??
Thanks and Regards
Talha
-- WorldCALL Webmail
From: ghicks at cadence dot com
Subject: Mail Transaction Failed
Date: Thu, 12 Feb 2004 12:49:00 +0530
Mail transaction failed. Partial message is available.
[base64-encoded attachment "test.zip" omitted]
From: Harald Kapper <hk at kapper dot net>
Subject: Re: SQL and LDAP (Re: virtualhosting with qpopper)
Date: Thu, 12 Feb 2004 12:35:50 +0100
On Mon, 9 Feb 2004 14:11:28 -0500, Chuck Yerkes
<chuck+qpopper at yerkes dot com> wrote:
>To bring it back on topic, I think it would be far more useful to have
>a qpopper that natively could use LDAP to authenticate and also to
>get a users' mailstore location.
well....
now is there such a patch or similar available?
thx in advance
hk
Date: Sat, 14 Feb 2004 09:54:27 -0500
From: "Kevin M. Barrett" <kmb at kmb dot com>
Subject: Real strange situation with Qpopper
I need some insight into this issue.
Two days ago I changed all of the accounts on my Linux server to remove a
prefix that was left over from when the clients' accounts were on a public
server. That switch went quite well, with very few issues, except for one
very major one. Some people are still able to connect to the server using
the old user name and password. Mind you, there is no entry for the old
user name in either the password or shadow files. When a user connects
using the old user name they see that there is no email in the mailbox. I
have walked this through using telnet to port 110, so this is not a client
issue... I assume that a reboot of the box will take care of this, but this
is a real issue. The Linux distro is Red Hat 9.0.
Kevin
Kevin M. Barrett
KMB IT Consulting, Inc
508-450-7717
Date: Mon, 16 Feb 2004 07:37:33 +0800
From: Tim Villa <tvilla at cyllene.uwa.edu dot au>
Subject: Re: Real strange situation with Qpopper
It's possible your users' information is being cached by the name service
cache daemon - try an "nscd -i passwd" (-i = invalidate, ie clear the
cached passwd table information) and see if that makes a difference.
Tim
At 09:54 AM 14/02/2004 -0500, you wrote:
>I need some insight into this issue.
>
>Two days ago I changed all of the accounts on my Linux server to remove a
>prefix that was left over from when the clients' accounts were on a public
>server. That switch went quite well, with very few issues, except for one
>very major one. Some people are still able to connect to the server using
>the old user name and password. Mind you, there is no entry for the old
>user name in either the password or shadow files. When a user connects
>using the old user name they see that there is no email in the mailbox. I
>have walked this through using telnet to port 110, so this is not a client
>issue... I assume that a reboot of the box will take care of this, but
>this is a real issue. The Linux distro is Red Hat 9.0.
--
Tim Villa, Network / Systems Administrator
M252, Business School and Law School
The University of Western Australia CRICOS provider number 00126G
Phone: +61-8-6488-1796, Fax: +61-8-6488-1068
Mail <mailto:tvilla at cyllene.uwa.edu.au> WWW <http://timvilla dot com/>
Date: Sun, 15 Feb 2004 20:41:13 -0500
From: "Kevin M. Barrett" <kmb at kmb dot com>
Subject: Re: Real strange situation with Qpopper
Qpopper List;
Update on this, but still no answers. I did receive many suggestions that
it could be nscd that was causing this; that daemon is not used. I ended
up rebooting the server to clear the problem and that did take care of
it. Then I proceeded to attempt to reproduce the problem on another
server, without success: it behaved just as I would expect it to; when the
username was changed, the old name was rejected. So I added a user on the
original system, sent it 30 or so messages and read them, then went and
changed the name of the account, and the old name was then refused as
expected. So it would seem that, at least for now, this problem is not
reproducible.
Thanks for all of your suggestions,
Kevin
At 06:37 PM 2/15/2004, Tim Villa wrote:
>It's possible your users' information is being cached by the name service
>cache daemon - try an "nscd -i passwd" (-i = invalidate, ie clear the
>cached passwd table information) and see if that makes a difference.
>
>Tim
>
>At 09:54 AM 14/02/2004 -0500, you wrote:
>>I need some insight into this issue.
>>
>>Two days ago I changed all of the accounts on my Linux server to remove
>>a prefix that was left over from when the clients' accounts were on a
>>public server. That switch went quite well, with very few issues, except
>>for one very major one. Some people are still able to connect to the
>>server using the old user name and password. Mind you, there is no entry
>>for the old user name in either the password or shadow files. When a
>>user connects using the old user name they see that there is no email in
>>the mailbox. I have walked this through using telnet to port 110, so
>>this is not a client issue... I assume that a reboot of the box will
>>take care of this, but this is a real issue. The Linux distro is Red Hat 9.0.
>
>--
>Tim Villa, Network / Systems Administrator
>M252, Business School and Law School
>The University of Western Australia CRICOS provider number 00126G
>Phone: +61-8-6488-1796, Fax: +61-8-6488-1068
>Mail <mailto:tvilla at cyllene.uwa.edu.au> WWW <http://timvilla dot com/>
>
>
Kevin M. Barrett
KMB IT Consulting, Inc
508-450-7717
From: "Derek Conniffe" <derek at rivertower dot ie>
Subject: POPPER jamming
Date: Mon, 16 Feb 2004 11:16:55 -0000
Hi all,
I've been having a problem with qpopper for quite a while now with
qpopper version 4.0.5 and SuSE linux 7.1 with kernel 2.4.21 (SMP two
processors).
The problem is that, from time to time, a user's mailbox gets "locked" -
the .USERID.pop file is there and never goes away. To fix the situation
I concatenate the .pop file contents to the end of the user's mail file
(given the delay between the mailbox locking and the user reporting the
problem, there are normally more emails awaiting download in their
/var/mail/USERID file) and then I rename or delete the .USERID.pop file -
that fixes things for a while again. I notice that this problem seems
to happen with users using Outlook 2000 (typical, I know - I personally
use fetchmail to get my email into a local Linux box and have never
experienced this problem).
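The manual fix boils down to the following (USERID stands in for the
real account, and I first make sure no popper process still has the
mailbox open):
  ps ax | grep popper                              # no session still live?
  cat /var/mail/.USERID.pop >> /var/mail/USERID    # put pending mail back
  rm /var/mail/.USERID.pop                         # clear the stale file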
Still - should qpopper gracefully clean up in the event of an email
client disconnection problem?
I am using qmail as the mail receiving agent. qmail delivers mail into
the ~/Mailbox file, and symbolic links tie that to the /var/mail/USERID
files so that qpopper can access the mail.
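(For reference, the links look something like this - USERID is a
placeholder, and I believe it is the spool name that points at the qmail
Mailbox, but check your own setup rather than taking my word for it:)
  ln -s /home/USERID/Mailbox /var/mail/USERID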
I have used this configuration before with Solaris 8 and didn't have
these problems at all - I'm only seeing it now with the Linux setup
(although I may be running a newer version of qmail & qpopper now - I'm
not sure).
Does anyone know what steps I should take from here to try to locate
this problem?
Thanks very much for your help,
Derek
--
Derek Conniffe
Rivertower Limited
Tel: +353 1 201 0180
Fax: +353 1 201 0085
Email: derek at rivertower dot ie
Web: http://www.rivertowerhosting.com
From: "Derek Conniffe" <derek at rivertower dot ie>
Subject: RE: POPPER jamming
Date: Mon, 16 Feb 2004 14:23:11 -0000
Thanks Chris,
I'll try this. I've increased the timeout to 300 seconds (assuming the
default timeout is less than 300 seconds!) with the -T 300 popper
switch in inetd.conf, which now reads:
pop3 stream tcp nowait root /usr/sbin/tcpd /usr/local/sbin/popper -s -R -T 300
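(After editing inetd.conf I make inetd reread it - killall works on this
box, or you can kill -HUP the inetd PID directly:)
  killall -HUP inetd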
I didn't see the -T option in your inetd.conf line (maybe you have
configured the timeout in a popper global configuration file?). But I
have put the -R that I saw in your configuration into mine too -
disabling the reverse DNS lookup can't hurt anything.
I'll see what happens now...
All the best,
Derek
-----Original Message-----
From: Chris Payne [mailto:cpayne at pr.uoguelph dot ca]
Sent: 16 February 2004 13:53
To: Derek Conniffe
Subject: Re: POPPER jamming
I run a Linux SMP system, and had initial problems with this file
hanging around. The problem was resolved by increasing the wait time on
the inetd line for popper.
pop3 stream tcp nowait.400 root /usr/local/lib/popper popper -s -R
This may or may not be the same issue that you have, but it is worth a
look.
Regards,
Chris Payne
On 16 Feb 2004 at 11:16, Derek Conniffe wrote:
> Hi all,
>
> I've been having a problem with qpopper for quite a while now with
> qpopper version 4.0.5 and SuSE linux 7.1 with kernel 2.4.21 (SMP two
> processors).
>
>
> Thanks very much for your help,
>
> Derek
>
> --
> Derek Conniffe
> Rivertower Limited
--
Chris Payne
Network Administrator
Physical Resources Dept, University of Guelph cpayne at pr.uoguelph dot ca
Tel: (519) 824-4120 x52882
Fax: (519) 837-0581
---
Incoming mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.583 / Virus Database: 369 - Release Date: 10/02/2004
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.583 / Virus Database: 369 - Release Date: 10/02/2004
From: "Chris Payne" <cpayne at pr.uoguelph dot ca>
Date: Mon, 16 Feb 2004 09:42:58 -0500
Subject: RE: POPPER jamming
For us, it was the nowait timeout. Popper was dropping the user lockfile
and then hanging the POP session.
I've increased the nowait from its default (100, I believe) to nowait.400.
This detects whether the POP client has disconnected or is failing to
respond - more so if you have dial-up users or users with Outlook.
I didn't have to change the -T option (yet) :-) So far, we're seeing
positive results.
On 16 Feb 2004 at 14:23, Derek Conniffe wrote:
> Thanks Chris,
>
> I'll try this. I've increased the timeout to 300 seconds (assuming the
> default timeout is less than 300 seconds!) with the -T 300 popper
> switch in inetd.conf [...]
>
> I'll see what happens now...
>
> All the best,
>
> Derek
--
Chris Payne
Network Administrator
Physical Resources Dept, University of Guelph
cpayne at pr.uoguelph dot ca
Tel: (519) 824-4120 x52882
Fax: (519) 837-0581
Date: Mon, 16 Feb 2004 15:18:16 -0500 (EST)
From: Alan Brown <alanb at digistar dot com>
Subject: RE: POPPER jamming
On Mon, 16 Feb 2004, Chris Payne wrote:
> I've increased the nowait from its default (100, I believe) to nowait.400.
> This detects whether the POP client has disconnected or is failing to
> respond - more so if you have dial-up users or users with Outlook.
Uh, no.
nowait.XXX changes inetd's rate limit (by default, 40 invocations in 60
seconds) to whatever XXX is before inetd decides the process is looping
and shuts the service down.
In your case: 400 calls per minute.
It would be better to make qpopper persistent and switch from nowait.400
to wait. The fork load will be significantly less, for starters.
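That is, something along these lines (a sketch only - check that your
qpopper build actually supports running persistently behind a wait-style
socket, or use its standalone mode, before relying on this):
  pop3 stream tcp wait root /usr/local/sbin/popper popper -s -R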
From: "Derek Conniffe" <derek at rivertower dot ie>
Subject: RE: POPPER jamming
Date: Mon, 16 Feb 2004 20:34:30 -0000
Hi Alan,
I agree with you on the performance side of it - you'd never even
consider running apache out of inetd - but I don't have the levels of
traffic to worry about the [Linux] default of 40 spawned processes / 60
seconds. I'm still not sure what's causing my problem. I'm fairly sure
it's something to do with client disconnects: the problems occur in two
distinct offices, and their internet access is not the norm - one office
is on experimental high-speed DSL (lots of megabits) and the other is on
a wireless link - so maybe it's something related to their networks, or
else it's something to do with their MS Outlook clients.
Derek
-----Original Message-----
From: Alan Brown [mailto:alanb at digistar dot com]
Sent: 16 February 2004 20:18
To: Chris Payne
Cc: Derek Conniffe; qpopper at lists.pensive dot org
Subject: RE: POPPER jamming
On Mon, 16 Feb 2004, Chris Payne wrote:
> I've increased the nowait from its default (100, I believe) to
> nowait.400. This detects whether the POP client has disconnected or is
> failing to respond - more so if you have dial-up users or users with
> Outlook.
Uh, no.
nowait.XXX changes inetd's rate limit (by default, 40 invocations in 60
seconds) to whatever XXX is before inetd decides the process is looping
and shuts the service down.
In your case: 400 calls per minute.
It would be better to make qpopper persistent and switch from nowait.400
to wait. The fork load will be significantly less, for starters.
Date: Mon, 16 Feb 2004 15:47:23 -0500 (EST)
From: Alan Brown <alanb at digistar dot com>
Subject: RE: POPPER jamming
On Mon, 16 Feb 2004, Derek Conniffe wrote:
> I agree with you on the performance side of it - you'd never even
> consider running apache out of inetd - I don't have the levels of
> traffic to worry about the [Linux] default 40 spawned processes / 60
> seconds.
With 70 active phone lines I had to crank the limit up to about 400,
thanks to a few users continually popping every 10 seconds when they
were expecting mail, plus most had their clients set to check every
minute regardless.
It only takes one clueless user to blow through the default limit and
cause a 5-minute mini-DoS for everyone else trying to get their mail.
From: "Chris Payne" <cpayne at pr.uoguelph dot ca>
Date: Mon, 16 Feb 2004 15:50:40 -0500
Subject: RE: POPPER jamming
That is exactly what we had happening here at my work also.
- Chris Payne
On 16 Feb 2004 at 15:47, Alan Brown wrote:
> On Mon, 16 Feb 2004, Derek Conniffe wrote:
>
> > I agree with you on the performance side of it - you'd never even
> > consider running apache out of inetd - I don't have the levels of
> > traffic to worry about the [Linux] default 40 spawned processes / 60
> > seconds.
>
> With 70 active phone lines I had to crank the limit up to about 400,
> thanks to a few users continually popping every 10 seconds when they
> were expecting mail, plus most had their clients set to check every
> minute regardless.
>
> It only takes one clueless user to blow through the default limit and
> cause a 5-minute mini-DoS for everyone else trying to get their mail.
>
>
--
Chris Payne
Network Administrator
Physical Resources Dept, University of Guelph
cpayne at pr.uoguelph dot ca
Tel: (519) 824-4120 x52882
Fax: (519) 837-0581
Date: Thu, 19 Feb 2004 10:43:03 -0500
From: "Vsevolod (Simon) Ilyushchenko" <simonf at cshl dot edu>
Subject: -ERR Unknown command: "g". ?
Hi,
I have compiled qpopper 3.1.2 on a Solaris 8 machine and tried to
connect to port 995 from Mozilla. The qpopper log shows:
-ERR Unknown command: "g".
I could not find any information about this. I would be grateful for any
hints.
Thanks,
Simon
--
Simon (Vsevolod ILyushchenko) simonf at cshl dot edu
http://www.simonf.com
The unknown is honoured, the known is neglected -
until all is known.
The Cú Chulaind myth
Date: Fri, 20 Feb 2004 09:37:27 +0800
From: Tim Villa <tvilla at cyllene.uwa.edu dot au>
Subject: Re: -ERR Unknown command: "g". ?
At 10:43 AM 19/02/2004 -0500, Vsevolod (Simon) Ilyushchenko wrote:
>Hi,
>
>I have compiled qpopper 3.1.2 on a Solaris 8 machine and tried to connect
>to port 995 from Mozilla. The qpopper log shows:
>
>-ERR Unknown command: "g".
>
>I could not find any information about this. I would be grateful for any
>hints.
>
>Thanks,
>Simon
That could simply mean someone telnetted to port 995 and typed "g" - the
command doesn't work and gets logged. Is it happening regularly? The log
line should include the IP address of the client sending it.
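For comparison, a legitimate manual session on the standard POP3 port
looks like this (the host and user are examples; each command should
draw a +OK response from the server):
  telnet pop.example.com 110
  USER alice
  PASS sesame
  STAT
  QUIT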
Tim
--
Tim Villa, Network / Systems Administrator
M252, Business School and Law School
The University of Western Australia CRICOS provider number 00126G
Phone: +61-8-6488-1796, Fax: +61-8-6488-1068
Mail <mailto:tvilla at cyllene.uwa.edu.au> WWW <http://timvilla dot com/>
Date: Thu, 19 Feb 2004 16:13:04 -1000
From: Clifton Royston <cliftonr at lava dot net>
Subject: Re: -ERR Unknown command: "g". ?
On Fri, Feb 20, 2004 at 09:37:27AM +0800, Tim Villa wrote:
> At 10:43 AM 19/02/2004 -0500, Vsevolod (Simon) Ilyushchenko wrote:
> >Hi,
> >
> >I have compiled qpopper 3.1.2 on a Solaris 8 machine and tried to connect
> >to port 995 from Mozilla. The qpopper log shows:
> >
> >-ERR Unknown command: "g".
> >
> >I could not find any information about this. I would be grateful for any
> >hints.
>
> That could simply mean someone telnetted to port 995 and typed "g" - the
> command doesn't work and gets logged. Is it happening regularly? The log
> line should include the IP address of the client sending it.
Sorry, I had a delayed reaction to the mention of port 995.
That is usually the port for POP over SSL or TLS. Mozilla is
probably attempting to initiate a secure connection, and your
popper is not currently configured to support it. Check that your
popper is compiled with OpenSSL, that you have used the correct
value of the -l flag, and/or that you have set the appropriate options
in the config file to enable TLS.
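For a quick test from another box, and a sketch of what the
configuration involves (the option names below are from the 4.x
documentation as I remember them, and if I recall correctly TLS support
only arrived with the 4.0 series, so a 3.1.2 build may need upgrading
first - verify against your own copy of the Qpopper guide):
  openssl s_client -connect yourhost:995
and, in the qpopper configuration file, something like:
  set tls-support = alternate-port
  set tls-server-cert-file = /etc/mail/certs/cert.pem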
-- Clifton
--
Clifton Royston -- cliftonr at tikitechnologies dot com
Tiki Technologies Lead Programmer/Software Architect
Did you ever fly a kite in bed? Did you ever walk with ten cats on your head?
Did you ever milk this kind of cow? Well we can do it. We know how.
If you never did, you should. These things are fun, and fun is good.
-- Dr. Seuss
Last updated on 19 Feb 2004 by Pensive Mailing List Admin