The values below are mostly demonstrative, showing how to tune your system for different needs; they are not some kind of ultimate optimal settings. This article aims to provide a quick overview of the ways to fine-tune your system settings, and of their limitations.
The name is a bit misleading: it doesn't set the number of users on the system, but is used in the formula that calculates the maximal number of allowed processes.
You can find it in your kernel configuration file, something like this:

maxusers	32

This is the default value, so if we look at the formulas we get the following process limit values:
/usr/src/sys/param.h:
#define	NPROC	(20 + 16 * MAXUSERS)

/usr/src/sys/conf/param.c:
#define	MAXFILES	(3 * (NPROC + MAXUSERS) + 80)
So we get 532 for NPROC (the maximal number of processes) and 1772 for MAXFILES (the maximal number of open file descriptors).
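As a quick sanity check, the two formulas above can be evaluated in the shell for the default maxusers of 32:

```shell
# Evaluate the kernel's limit formulas for MAXUSERS = 32.
maxusers=32
nproc=$(( 20 + 16 * maxusers ))                # NPROC
maxfiles=$(( 3 * (nproc + maxusers) + 80 ))    # MAXFILES
echo "NPROC=$nproc MAXFILES=$maxfiles"         # prints NPROC=532 MAXFILES=1772
```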
Some say that the maxusers should be set to the amount of RAM in megabytes.
For reference, FreeBSD sets it automatically using this formula, but caps the maximum at 384.
Setting it to 64 is a safe bet if you don't want to experiment too much. Just change it in your kernel configuration file:
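For example, to double the default shown above, the line would simply read:

```
maxusers	64
```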
Compile the new kernel with build.sh or manually, install it, and reboot.
You can check your limits with sysctl:
With maxusers 32
$ sysctl proc.curproc.rlimit.maxproc
proc.curproc.rlimit.maxproc.soft = 160
proc.curproc.rlimit.maxproc.hard = 532
$ sysctl proc.curproc.rlimit.descriptors
proc.curproc.rlimit.descriptors.soft = 64
proc.curproc.rlimit.descriptors.hard = 1772
With maxusers 64
$ sysctl proc.curproc.rlimit.maxproc
proc.curproc.rlimit.maxproc.soft = 160
proc.curproc.rlimit.maxproc.hard = 1044
$ sysctl proc.curproc.rlimit.descriptors
proc.curproc.rlimit.descriptors.soft = 64
proc.curproc.rlimit.descriptors.hard = 3404
So now you can change the hard limits. Let's look at the soft limits.
or with ulimit:
$ ulimit -a
core file size        (blocks, -c)  unlimited
data seg size         (kbytes, -d)  131072
file size             (blocks, -f)  unlimited
max locked memory     (kbytes, -l)  80920
max memory size       (kbytes, -m)  242760
open files                    (-n)  64
pipe size          (512 bytes, -p)  1
stack size            (kbytes, -s)  2048
cpu time             (seconds, -t)  unlimited
max user processes            (-u)  160
virtual memory        (kbytes, -v)  133120
You can set it with the file /etc/login.conf:
default:\
	:path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/pkg/bin /usr/pkg/sbin /usr/local/bin:\
	:umask=022:\
	:datasize-max=3072M:\
	:datasize-cur=1024M:\
	:maxproc-max=1044:\
	:maxproc-cur=512:\
	:openfiles-cur=256:\
	:stacksize-cur=8M:
Next time you start the system, all users belonging to the default login class will have the following limits:
$ ulimit -a
coredump(blocks)      unlimited
data(KiB)             1048576
file(blocks)          unlimited
lockedmem(KiB)        124528
memory(KiB)           373584
nofiles(descriptors)  256
processes             512
stack(KiB)            8192
time(cpu-seconds)     unlimited
You may set different limits for different users, and thus for different services:
database:\
	:ignorenologin:\
	:datasize=infinity:\
	:maxproc=infinity:\
	:openfiles-cur=1024:\
	:stacksize-cur=48M:
You should run this command after editing your login.conf:
$ cap_mkdb /etc/login.conf
You can assign the newly created login class to the desired user by doing something like this:
$ usermod -L database pgsql
Let's check our limits again with sysctl:
$ sysctl proc.curproc.rlimit.maxproc
proc.curproc.rlimit.maxproc.soft = 512
proc.curproc.rlimit.maxproc.hard = 1044
$ sysctl proc.curproc.rlimit.descriptors
proc.curproc.rlimit.descriptors.soft = 256
proc.curproc.rlimit.descriptors.hard = 3404
Much more reasonable for a modern system.
Shared memory and semaphores are part of the System V IPC. Using and fine tuning shared memory and semaphores can give you increased performance on your NetBSD server.
You can check its settings with sysctl:
$ sysctl kern.ipc
kern.ipc.sysvmsg = 1
kern.ipc.sysvsem = 1
kern.ipc.sysvshm = 1
kern.ipc.shmmax = 8388608
kern.ipc.shmmni = 128
kern.ipc.shmseg = 128
kern.ipc.shmmaxpgs = 2048
kern.ipc.shm_use_phys = 0
kern.ipc.msgmni = 40
kern.ipc.msgseg = 2048
kern.ipc.semmni = 10
kern.ipc.semmns = 60
kern.ipc.semmnu = 30
As you can see, the default maximum size of a shared memory segment (shmmax) is 8 megabytes, but for a PostgreSQL server you will most likely need about 128 megabytes.
Note that you cannot set shmmax directly with sysctl; you need to set the value in pages with kern.ipc.shmmaxpgs.
The default PAGE_SIZE is 4096, so if you want to set it to 128M, you have to do:
grimnismal# sysctl -w kern.ipc.shmmaxpgs=32768
kern.ipc.shmmaxpgs: 4096 -> 32768
So the formula is: 128 * 1024 * 1024 / 4096 = 32768
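The arithmetic above can be wrapped in a tiny shell sketch, assuming the default PAGE_SIZE of 4096 bytes:

```shell
# Convert a desired shmmax (in megabytes) into pages for kern.ipc.shmmaxpgs.
desired_mb=128
page_size=4096     # default PAGE_SIZE; verify with: sysctl hw.pagesize
pages=$(( desired_mb * 1024 * 1024 / page_size ))
echo "kern.ipc.shmmaxpgs=$pages"   # prints kern.ipc.shmmaxpgs=32768
```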
You can make any sysctl change permanent by setting it in /etc/sysctl.conf
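For example, to carry the shmmaxpgs setting above over a reboot, /etc/sysctl.conf could contain:

```
kern.ipc.shmmaxpgs=32768
```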
You can also get detailed information on System V interprocess communication (IPC) facilities on the system with the following command:
$ ipcs
IPC status from <running system> as of Mon Dec  3 18:52:00 2007
Message Queues:
T        ID     KEY        MODE       OWNER    GROUP
Shared Memory:
T        ID     KEY        MODE       OWNER    GROUP
m     65536    5432001 --rw-------     pgsql    pgsql
Semaphores:
T        ID     KEY        MODE       OWNER    GROUP
s     65536    5432001 --rw-------     pgsql    pgsql
s     65537    5432002 --rw-------     pgsql    pgsql
s     65538    5432003 --rw-------     pgsql    pgsql
You can also force shared memory segments to stay in physical memory, which means they will never be paged out to swap.
You may enable this behaviour with the kern.ipc.shm_use_phys sysctl.
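A sketch of what that could look like in /etc/sysctl.conf (the default, as shown earlier, is 0):

```
# wire SysV shared memory into physical memory
kern.ipc.shm_use_phys=1
```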
TCP uses what is called the “congestion window” to determine how many packets can be sent at one time. The larger the congestion window size, the higher the throughput. The maximum congestion window is related to the amount of buffer space that the kernel allocates for each socket.
So on a high-bandwidth line, the bottleneck could be the buffer sizes.
Here's the formula for a network link's throughput:
Throughput = buffer size / latency
So if we rearrange it a bit, we get the formula for the ideal buffer size:
buffer size = 2 * delay * bandwidth
The delay is the network latency, which is most commonly known as "ping".
I think I don't have to introduce this tool:
$ ping yahoo.com
PING yahoo.com (184.108.40.206): 56 data bytes
64 bytes from 220.127.116.11: icmp_seq=0 ttl=50 time=195.596 ms
64 bytes from 18.104.22.168: icmp_seq=1 ttl=50 time=188.883 ms
64 bytes from 22.214.171.124: icmp_seq=2 ttl=51 time=192.023 ms
^C
----yahoo.com PING Statistics----
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.883/192.167/195.596/3.359 ms
However, ping(1) gives you the round-trip time of the network link -- which is twice the delay -- so the final formula is the following:
buffer size = RTT * bandwidth
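As a worked example with assumed numbers -- a 100 Mbit/s link and roughly the 192 ms RTT measured above:

```shell
# buffer size = RTT * bandwidth, converted to bytes.
bandwidth_bps=100000000    # 100 Mbit/s, an assumed link speed
rtt_ms=192                 # round-trip time from ping, in milliseconds
# divide first to keep the intermediate values small
buffer_bytes=$(( bandwidth_bps / 8 / 1000 * rtt_ms ))
echo "$buffer_bytes"       # prints 2400000 (about 2.3 MB)
```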
Fortunately, NetBSD provides automatic sizing for these buffers. It can be checked and enabled with sysctl:
net.inet.tcp.recvbuf_auto = 0
net.inet.tcp.recvbuf_inc = 16384
net.inet.tcp.recvbuf_max = 262144
net.inet.tcp.sendbuf_auto = 0
net.inet.tcp.sendbuf_inc = 8192
net.inet.tcp.sendbuf_max = 262144
The automatic setting for sendbuf and recvbuf is disabled in the default installation.
The default maximal send and receive buffers are set to 256 KBytes, which is very small for today's fast links.
A reasonable value for newer systems would be 16 MBytes, so you may set that after turning the automatic sizing on with sysctl:
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
Just remember that your application must avoid setting SO_RCVBUF or SO_SNDBUF if it wants to take advantage of the automatically sized buffers.
RFC 6928 permits extending the initial window to 10 segments. By default NetBSD uses 4 segments, as specified in RFC 3390. You can increase it with the following sysctls:
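The original text does not list the knobs; assuming a recent NetBSD, the initial window should be controlled by net.inet.tcp.init_win (and net.inet.tcp.init_win_local for directly connected hosts), so a sketch might be:

```
net.inet.tcp.init_win=10
net.inet.tcp.init_win_local=10
```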
If you are seeing drops due to the limited IP queue (check the net.inet.ip.ifq.drops sysctl), you can increase that by using:
net.inet.ip.ifq.maxlen = 4096
If you are still seeing low throughput, maybe it's time for desperate measures! Try changing the congestion control algorithm to CUBIC:
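The command itself is missing from the original text; assuming NetBSD's net.inet.tcp.congctl.selected knob, it would look something like:

```
sysctl -w net.inet.tcp.congctl.selected=cubic
```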
Or try decreasing the limit (expressed in hz ticks) at which the system fires a delayed ACK (for an odd-numbered packet). Usually one tick is 10 ms, but you may want to double-check using the kern.clockrate sysctl and dividing one second by the value there. So, to decrease delack_ticks to 50 ms use:
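Assuming the usual 100 Hz clock, 50 ms corresponds to 5 ticks, so the setting would presumably be:

```
sysctl -w net.inet.tcp.delack_ticks=5
```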
You may enable an experimental buffer queue strategy for better responsiveness under high disk I/O load.
These options are believed to be stable, but are not yet the default.
Enable them with the following lines in your kernel configuration file:
options 	BUFQ_READPRIO
options 	BUFQ_PRIOCSCAN
NOTE: Heavy optimisations can make your system hard to debug, cause unpredictable behaviour, or kill your pet. In particular, use of -mtune is highly discouraged, because it does not improve performance considerably (or at all) compared to -march=i686, and gcc4 can't handle it correctly, at least on Athlon CPUs.
You can put something like this into your mk.conf when you compile your packages and your system.
FIXME: This is only for building world
FIXME: For packages
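As a purely hypothetical illustration (the original leaves these as FIXMEs), an mk.conf fragment might look like this; CPUFLAGS and COPTS are the usual NetBSD build variables, but the exact values are assumptions:

```
# mk.conf sketch -- adjust -march for your CPU
CPUFLAGS=-march=i686
COPTS=-O2
```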
For more detailed information about the possible CFLAGS values, please read the GNU C Compiler documentation.
"17.4. Managing Kernel Resources". PostgreSQL 8.3beta3 Documentation. PostgreSQL Global Development Group, December 2007. Retrieved December 2, 2007. http://www.postgresql.org/docs/8.3/static/kernel-resources.html
Eric Radman. "Performance Tuning a NetBSD Server". Retrieved December 3, 2007. http://eradman.com/article/bsdtuning1
"TCP Tuning Guide". Lawrence Berkeley National Laboratory, November 15, 2007. Retrieved December 4, 2007. http://www-didc.lbl.gov/TCP-tuning/