Linux IO Port Programming mini-HOWTO (part 1/1)

Post by Riku Saikkonen » Wed, 03 Jan 1996 04:00:00



Archive-name: linux/howto/mini/io-port-programming
Last-modified: 1 Jan 96

*** The `Linux IO Port Programming mini-HOWTO' is posted automatically by the
*** Linux HOWTO coordinator, Greg Hankins <gr...@sunsite.unc.edu>.  Please
*** direct any comments or questions about this HOWTO to the author,
*** Riku Saikkonen <r...@spider.compart.fi>.

--- BEGIN Linux IO Port Programming mini-HOWTO part 1/1 ---

        Linux I/O port programming mini-HOWTO

        Author: r...@spider.compart.fi (Riku Saikkonen)
        Last modified: Dec 26 1995

This document is Copyright 1995 Riku Saikkonen. See the normal Linux HOWTO
COPYRIGHT for details.

This HOWTO document describes programming hardware I/O ports and waiting
for short (microsecond to millisecond) periods of time in user-mode Linux
C programs running on an Intel x86 processor. This document is a descendant
of the very small IO-Port mini-HOWTO by the same author.

If you have corrections or something to add, feel free to e-mail me
(r...@spider.compart.fi)...

Changes from the previous version (Nov 16 1995):
Lots, I haven't kept count. Added parallel port specification.

        I/O ports in C programs, the normal way

Routines for accessing I/O ports are in /usr/include/asm/io.h (or
linux/include/asm-i386/io.h in the kernel source distribution). The routines
there are inline macros, so it is enough to #include <asm/io.h>; you do not
need any additional libraries.

Because of a limitation in gcc (present at least in 2.7.0 and below), you
_have to_ compile any source code using these routines with optimisation
turned on (i.e. gcc -O). Because of another limitation in gcc, you cannot
compile with both optimisation and debugging (-g). This means that if you
want to use gdb on programs using I/O ports, it might be a good idea to put
the I/O port-using routines in a separate source file, and, when you debug,
compile that source file with optimisation and the rest with debugging.

Before you access any ports, you must give your program permission to do
that. This is done by calling the ioperm(2) function (declared in unistd.h,
and defined in the kernel) somewhere near the start of your program (before
any I/O port accesses). The syntax is ioperm(from,num,turn_on), where from
is the first port number to give access to, and num the number of
consecutive ports to give access to. For example, ioperm(0x300,5,1); would
give access to ports 0x300 through 0x304 (a total of 5 ports). The last
argument is a Boolean value specifying whether to give the program access
to the ports (true (1)) or to remove it (false (0)). You may call ioperm()
multiple times to enable multiple non-consecutive ports. See the
ioperm(2) manual page for details on the syntax.

The ioperm() call requires your program to have root privileges; thus you
need to either run it as user root, or make it setuid root. You should be
able to (I haven't tested this; please e-mail me if you have) drop the root
privileges after you have called ioperm() to enable any ports you want to
use. You are not required to explicitly drop your port access privileges
with ioperm(...,0); at the end of your program, it is done automatically.

Ioperm() privileges are transferred across fork()s and exec()s, and across
a setuid to a non-root user.
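
For example, a minimal sketch of this (untested, like the privilege drop
itself; the port range 0x300-0x304 and the setuid root setup are only
assumptions for the example):

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      /* Request access to ports 0x300-0x304; this needs root. */
      if (ioperm(0x300, 5, 1) != 0) {
          perror("ioperm");
          exit(1);
      }
      /* Drop root privileges (assuming the program is setuid root);
         as described above, the port access should survive this. */
      if (setuid(getuid()) != 0) {
          perror("setuid");
          exit(1);
      }
      /* ... access ports 0x300-0x304 here (see inb()/outb() below) ... */
      return 0;
  }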

Ioperm() can only give access to ports 0x000 through 0x3ff; for higher
ports, you need to use iopl(2) (which gives you access to all ports at
once); I have not done this, so see the manual page for details. I suspect
the level argument 3 will be needed to enable the port access. Please e-mail
me if you have details on this.

Then, to actually access the ports... To input a byte from a port, call
inb(port);, it returns the byte it got. To output a byte, call outb(value,
port); (notice the order of the parameters). To input a word from ports x
and x+1 (one byte from each to form the word, just like the assembler
instruction INW), call inw(x);. To output a word to the two ports,
outw(value,x);.
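
Here is a small sketch putting these together (compile with gcc -O); port
0x300 is only an example address:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <asm/io.h>

  int main(void)
  {
      unsigned char b;
      unsigned short w;

      if (ioperm(0x300, 2, 1) != 0) {  /* enable ports 0x300 and 0x301 */
          perror("ioperm");
          exit(1);
      }
      outb(0x55, 0x300);  /* output the byte 0x55 to port 0x300 */
      b = inb(0x300);     /* input a byte from port 0x300 */
      w = inw(0x300);     /* input a word from ports 0x300 and 0x301 */
      outw(w, 0x300);     /* output it back to the same two ports */
      printf("byte 0x%02x, word 0x%04x\n", b, w);
      return 0;
  }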

The inb_p(), outb_p(), inw_p(), and outw_p() macros work otherwise
identically to the ones above, but they do a short (about one microsecond)
delay after the port access; you can make the delay four microseconds by
#defining REALLY_SLOW_IO before including asm/io.h. These macros normally
(unless you #define SLOW_IO_BY_JUMPING, which probably isn't as accurate)
use a
port output to port 0x80 for their delay, so you need to give access to
port 0x80 with ioperm() first (outputs to port 0x80 should not affect any
part of the system). For more versatile methods of delaying, read on.
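
A short sketch of the pausing macros (port 0x300 is again just an example):

  #define REALLY_SLOW_IO  /* makes the *_p() delay ~4 us instead of ~1 us */
  #include <asm/io.h>

  unsigned char probe(void)
  {
      /* ioperm() must have enabled port 0x80 (for the delay) and the
         real port before this is called. */
      outb_p(0x01, 0x300);  /* output with a short delay afterwards */
      return inb_p(0x300);  /* input with a short delay afterwards */
  }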

Man pages for these macros are forthcoming in a future release of Linux
man-pages.

Troubleshooting:

Q1. I get segmentation faults when accessing ports!

A1. Either your program does not have root privileges, or the ioperm() call
    failed for some other reason. Check the return value of ioperm().

Q2. I can't find the in*(), out*() functions defined anywhere; gcc (or the
    linker) complains about undefined references!

A2. You did not compile with optimisation turned on (-O), and thus gcc could
    not resolve the macros in asm/io.h. Or you did not #include <asm/io.h>
    at all.

        An alternate method

Another way is to open /dev/port (a character device, major number 1, minor
4) for reading and/or writing (using the normal file access functions,
open() etc. - the stdio f*() functions have internal buffering, so avoid
them). Then seek to the appropriate byte in the file (file position 0 = port
0, file position 1 = port 1, and so on), and read or write a byte or word
from or to it. I have not actually tested this, and I am not quite sure if
it works that way or not; e-mail me if you have details.

Of course, for this your program needs read/write access to /dev/port. This
method is probably slower than the normal method above.
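
A sketch of what this would look like (untested, as said above), reading
and writing one byte of port 0x300:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <fcntl.h>

  int main(void)
  {
      int fd;
      unsigned char b;

      fd = open("/dev/port", O_RDWR);
      if (fd < 0) {
          perror("open /dev/port");
          exit(1);
      }
      lseek(fd, 0x300, SEEK_SET);  /* file position == port number */
      read(fd, &b, 1);             /* input a byte from port 0x300 */
      lseek(fd, 0x300, SEEK_SET);
      b = 0x55;
      write(fd, &b, 1);            /* output a byte to port 0x300 */
      close(fd);
      return 0;
  }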

        Interrupts (IRQs) and DMA access

As far as I know, it is impossible to use IRQs or DMA directly from a
user-mode program. You need to make a kernel driver; see the Linux Kernel
Hacker's Guide (khg-x.yy) for details and the kernel source for examples.

        High-resolution timing: Delays

First of all, I should say that user-mode programs cannot be guaranteed
exact control of timing, because of the multi-tasking, pre-emptive nature
of Linux. Your process might be scheduled out at any time, for anything
from about 20 milliseconds to a few seconds (on a system with very high
load). However, for most applications using I/O ports, this does not
really matter. To minimise the problem, you may want to nice your process
to a high priority.

There have been plans for a special real-time Linux kernel supporting the
above, discussed in comp.os.linux.development.system, but I do not know
their status; ask on that newsgroup. If you know more about this, e-mail
me...

Now, let me start with the easier ones. For delays of multiple seconds, your
best bet is probably to use sleep(3). For delays of tens of milliseconds
(about 20 ms seems to be the minimum delay), usleep(3) should work. These
functions give the CPU to other processes, so CPU time isn't wasted. See the
manual pages for details.
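
For example:

  #include <unistd.h>

  int main(void)
  {
      sleep(2);       /* give up the CPU for about two seconds */
      usleep(50000);  /* about 50 ms; may well be longer (see below) */
      return 0;
  }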

For delays of under about 20 milliseconds (probably depending on the speed
of your processor and machine, and the system load), giving up the CPU
doesn't work because the Linux scheduler usually takes at least about 20
milliseconds before it returns control to your process. Because of this,
for small delays, usleep(3) usually delays somewhat more than the amount
you specify, and at least about 20 ms.

For short delays (tens of us to a few ms or so), the easiest method is to
use udelay(), defined in /usr/include/asm/delay.h (linux/include/asm-i386/
delay.h). Udelay() takes the number of microseconds to delay (an unsigned
long) as its sole parameter, and returns nothing. It takes a few
microseconds more time than the parameter specifies because of the overhead
in the calculation of how long to wait (see delay.h for details).

To use udelay() outside of the kernel, you need to have the unsigned long
variable loops_per_sec defined with the correct value. As far as I know, the
only way to get this value from the kernel is to read /proc/cpuinfo for the
BogoMips value and multiply that by 500000 to get (an imprecise)
loops_per_sec.
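
A sketch of this (the /proc/cpuinfo field name and format here are my
assumption; check your kernel's output):

  #include <stdio.h>
  #include <stdlib.h>
  #include <asm/delay.h>

  unsigned long loops_per_sec = 1;  /* referenced by the udelay() macro */

  int main(void)
  {
      FILE *f = fopen("/proc/cpuinfo", "r");
      char line[256];
      double bogomips = 0.0;

      if (f == NULL) {
          perror("fopen /proc/cpuinfo");
          exit(1);
      }
      while (fgets(line, sizeof(line), f) != NULL)
          if (sscanf(line, "bogomips : %lf", &bogomips) == 1)
              break;
      fclose(f);
      loops_per_sec = (unsigned long)(bogomips * 500000.0);

      udelay(100);  /* delay about 100 microseconds */
      return 0;
  }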

For even shorter delays, there are a few methods. Outputting any byte to
port 0x80 (see above for how to do it) should wait for almost exactly 1
microsecond independent of your processor type and speed. You can do this
multiple times to wait a few microseconds. The port output should have no
harmful side effects on any standard machine (and some kernel drivers use
it). This is how {in|out}[bw]_p() normally do the delay (see asm/io.h).
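
For example, a small delay routine built on this (port 0x80 must have been
enabled with ioperm() first):

  #include <asm/io.h>

  void delay_us(int usecs)
  {
      while (usecs-- > 0)
          outb(0, 0x80);  /* each output takes about one microsecond */
  }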

If you know the processor type and clock speed of the machine the program
will be running on, you can hard-code shorter delays by running certain
assembler instructions (but remember, your process might be scheduled out at
any time, so the delays might well be longer every now and then). For the
table below, the internal processor speed determines the number of clock
cycles taken; e.g. for a 50 MHz processor (486DX-50 or 486DX2-50), one clock
cycle takes 1/50000000 seconds.

Instruction   i386 clock cycles   i486 clock cycles
nop                   3                   1
xchg %ax,%ax          3                   3
or %ax,%ax            2                   1
mov %ax,%ax           2                   1
add %ax,0             2                   1
[source: Borland Turbo Assembler 3.0 Quick Reference]
(sorry, I don't know about Pentiums; probably the same as the i486)
(I cannot find an instruction which would use one clock cycle on a i386)

The instructions nop and xchg in the table should have no side effects. The
rest may modify the flags register, but this shouldn't matter since gcc
should detect it.

To use these, call asm("instruction"); in your program. Have the
instructions in the syntax in the table above; ...
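
For example (assuming the delays in the table hold for your processor):

  void tiny_delay(void)
  {
      asm("nop");  /* one clock cycle on a i486, three on a i386 */
      asm("nop");
      asm("nop");
  }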


Linux IO Port Programming mini-HOWTO (part 1/1)

Post by Mr D » Wed, 17 Jan 1996 04:00:00


Do I need to call ioperm if I am trying to do inb/outb from an installable
driver module?
Does compiling with -g cause segfaults when using an installable driver?

Thanks for the info!

Judah