[Top] | [Contents] | [Index] | [ ? ] |
Arla is a free AFS implementation from KTH.
Please send comments (and fixes) on this manual and the arla programs to arla-drinkers@stacken.kth.se.
1. Introduction           Introduction to Arla.
2. AFS infrastructure     A description of the AFS infrastructure.
3. Organization of data   How different data are organized in AFS.
4. AFS and the real world Common problems and their solutions.
5. Parts of Arla          Description of the different parts of Arla.
6. Debugging              How to debug Arla when it's not working.
7. Porting                What you need to know to port Arla.
8. Oddities               Strange things that have happened to us.
9. Arla timeline          A short timeline of Arla.
10. Authors               The authors of Arla.
A. Acknowledgments        People who have helped us.
B. Index
-- The Detailed Node Listing ---
1. Introduction 2. AFS infrastructure
How data and servers are organized in AFS.
3.1 Requirements 3.3 Volume 3.7 Callbacks 3.8 Volume management 3.9 Relationship between pts uid and unix uid
How to cope with reality
4.1 NAT 4.2 Samba 4.3 Integration with Kerberos 4.4 Kerberos tickets and AFS tokens
The parts of arla
How does arla work        The relation between Arlad and XFS
5.1 The life of a file
Tools and libs
5.4 The files in arlad/
pioctl and kafs
How to debug arla when it's not working
6.1 Arlad 6.2 Debugging LWP with GDB 6.3 xfs 6.4 xfs on linux 6.5 Debugging techniques 6.6 Kernel debuggers 6.7 Darwin/MacOS X
Porting arla
7. Porting 7.1 user-space 7.2 XFS
Odd stuff you find when looking around
8. Oddities
Miscellaneous
9. Arla timeline 10. Authors A. Acknowledgments
B. Index
Caution: Parts of this package are not yet stable software. If something doesn't work, it's probably because it doesn't. If you don't have a backup of your data, make one.
Arla is a free AFS implementation. Some of the goals are:
This release is known to work on the following platforms: NetBSD, OpenBSD, FreeBSD, Linux, Solaris, Darwin/MacOS X.
Earlier releases are known to work on current or earlier versions of the following platforms: SunOS, AIX, IRIX, Digital UNIX. Some fixes might be necessary to make Arla work.
Work is being done, or has been done, to support the following platforms: HPUX, Fujitsu UXP/V. Some development is necessary to make Arla work.
There is work going on to support the following platform: Windows NT/2000. Contributions are very welcome.
Arla has the following features (quality varies between stable and not implemented):
If you find bugs in this software, make sure it is a genuine bug and not just a part of the code that isn't implemented.
Bug reports should be sent to arla-drinkers@stacken.kth.se. Please include information on what machine and operating system (including version) you are running, what you are trying to do, what happens, what you think should have happened, an example for us to repeat, the output you get when trying the example, and a patch for the problem if you have one. Please make any patches with diff -u or diff -c.
Suggestions, comments and other non bug reports are also welcome.
There are two mailing lists with talk about Arla. arla-announce@stacken.kth.se is a low-volume announcement list, while arla-drinkers@stacken.kth.se is for general discussion.
There is also a commit list, arla-commit@stacken.kth.se. Send a message to LIST-request@stacken.kth.se (where LIST is the name of the list) to subscribe.
The lists are archived at http://www.stacken.kth.se/lists/.
This is an overview of the AFS infrastructure as viewed from a Transarc perspective, since most people still run Transarc cells.
AFS filespace is split into smaller parts called cells. These cells are usually listed under `/afs'. A cell is usually a whole organization or an administrative unit within an organization. An example is e.kth.se (with the path `/afs/e.kth.se'), the department of electrical engineering at KTH, which obviously has the `e.kth.se' domain in DNS. Using DNS domains for cell names is the typical and most convenient way.
Note that cell names are written in lowercase by convention.
All cells (and their db-servers) in the AFS world are listed in a file named `CellServDB'. There is a central copy that is maintained by Transarc at `/afs/transarc.com/service/etc/CellServDB'.
Although the file is organized in IP-number/name pairs, where the name parts resemble comments, both values are used by the Transarc software, and confusion may arise if they are not kept synchronized with each other.
>e.kth.se       # Royal Institute of Technology, Elektro
130.237.48.8    #sonen.e.kth.se.
130.237.48.7    #anden.e.kth.se.
130.237.48.244  #fadern.e.kth.se.
Again, please note that the text after the # on the cell-name line is a comment, but the hostname after the # on an IP-address line is not a comment. The hostname and the IP address need to point at the same computer.
In addition, Arla can use DNS to find the db-servers of a cell. The DNS resource record used is `AFSDB'. The resource record was created by Transarc but was never implemented in released software.
`AFSDB' tells you what machines are db servers for a particular cell. The `AFSDB' resource record is also used for DCE/DFS. An example (the 1 means AFS, 2 is used for DCE):
e.kth.se.  IN  AFSDB  1 fadern.e.kth.se.
e.kth.se.  IN  AFSDB  1 sonen.e.kth.se.
e.kth.se.  IN  AFSDB  1 anden.e.kth.se.
Some cells use the abbreviated version `/afs/<word-before-first-dot>' (in the example above that would be `/afs/e'). This might be convenient when typing, but it is a bad idea, because it does not create the same name space everywhere. If you create a symbolic link to `/afs/e/foo/bar', it will not work for people in other cells.
There are several servers running in an AFS cell. For performance and redundancy reasons, these servers are often run on different hosts. There is a built-in hierarchy within the servers (in two different dimensions).
There is one server that keeps track of the other servers on a host, restarts them when they die, makes sure they run in the correct order, saves their core files when they crash, and provides an interface for the sysadmin to start/stop/restart the servers. This server is called the bos server (Basic OverSeer Server).
The other hierarchy separates keeping track of meta-data (volumes, users, passwords, etc) from doing the real hard work (serving files). The database server keeps the databases (obviously) and maintains several database copies on different hosts, replicated with Ubik (see below). The fileserver and the client software (like afsd/arlad, pts, and vos) pull meta-data out of the db server to find user privileges and to find out where volumes reside.
The bos server makes sure the other servers are running. If they crash, it saves the corefile and starts a new server. It also makes sure that servers/services that are not supposed to run at the same time do not. An example of this is the fileserver/volserver and the salvager. It would be devastating if the salvager tried to correct data that the fileserver is changing. The salvager is run before the fileserver starts. The administrator can also force a file server to run through salvage again.
Ubik is a distributed database. It is really a (distributed) flat file that you can perform read/write/lseek operations on. The important property of Ubik is that it provides a way to make sure that updates are done once (transactions), and that the database is kept consistent. It also provides read-only access to the database as long as at least one database server is available.
This works in the following way: a newly booted server sends out a message to all other servers telling them that it believes it is the new master server. If another server replies that it believes someone else (itself or a third server) is the master, the newly booted server may switch to that master, depending on how long the other has been master. If the servers can't agree, the one with the lowest IP address is supposed to win the argument. A server that is a slave still updates its database to the current version of the database.
An update to the database can only be done if more than half of the servers are available and vote for the master. An update is first propagated to all servers; after that is done, and if all servers agree with the change, a commit message is sent out from the master, the update is written to disk, and the serial number of the database is increased.
All servers in AFS use Ubik to store their data.
The vldb-server is responsible for the information about which fileserver every volume resides on, and which volumes exist on each fileserver.
To confuse you even more there are three types of support for the clients. Basically there is AFS 3.3, 3.4, and 3.6 support. The different interfaces look the same for the system administrator, but there are some important differences.
AFS 3.3 is the classic interface. 3.4 adds the possibility of multihomed servers for the client to talk to, which introduces the N interface. To deal with multihomed clients, AFS 3.5 was introduced. This is called the U interface. The names come from how the functions are named.
The N interface added more replication-sites in the database-entry structure. The U interface changed the server and clients in two ways.
When a 3.5 server boots it registers all its IP addresses. This means that a server can add (or remove) a network interface without rebooting. When registering at the vldb server, the file server presents itself with a UUID, a unique identifier. The UUID is stored in a file so that it stays constant even when network addresses are changed, added, or removed.
The protection server keeps track of all users and groups. It's used a lot by the file servers. Users can themselves create, modify, and delete groups.
When a fileserver is accessed, it learns the name of the client during authentication. This name is looked up in the protection database via the protection server, which returns the id of the user and all the groups that the user belongs to.
This information is used to check whether the user has access to a particular file or directory. All files created by the user are assigned the user id that the protection server returned.
The kaserver is a Kerberos server, but in other clothes. There is a new RPC interface to get tickets (tokens) and administer the server. The old Kerberos v4 interface is also implemented, and can be used by ordinary Kerberos v4 clients.
You can replace this server with a Heimdal kdc, since it provides a superset of the functionality.
The backup server keeps the backup database that is used when backing up and restoring volumes. The backup server is not used by other servers, only operators.
With the update server it is possible to automatically update configuration files and server binaries. You keep masters that contain the correct copy of all the files, and other servers can then fetch them from there.
The file server serves data to the clients, keeps track of callbacks, and breaks callbacks when needed. Volser is the administrative interface where you add, move, change, and delete volumes from the server.
The volume server and file server are run at the same time, and they sync with each other to make sure that the fileserver does not access a volume that volser is about to modify.
Every time a fileserver is started it registers its IP addresses with the vldb server using the VL_RegisterAddrs RPC call. As the unique identifier for itself it uses its afsUUID.
The afsUUID for a fileserver is stored in /usr/afs/local/sysid. This is the reason you must not clone a server without removing the sysid file. Otherwise the new fileserver will register as the old one, and all volumes on the old fileserver will be pointed at the new one (where they probably don't exist).
The fileserver doesn't bind to a specific interface (read: address); it gets all packets destined for port 7000 (afs-fileserver/udp). All outgoing packets are sent on the same socket, which means that your operating system will choose the source address of the UDP datagram.
This has the side effect that you get asymmetric routing on a multihomed fileserver for 3.4 (and older) compatible clients if they don't use the closest address when sorting the vldb entry. Arla avoids this problem.
Salvage is not a real server. It is run before the fileserver and volser are started to make sure the partitions are consistent.
It's imperative that salvager is NOT run at the same time as the fileserver/volser is running.
Fileserver, volumeserver, and salvage are all in one program.
There is neither a bu-server nor a ka-server. The ka-server is replaced by kth-krb or Heimdal. Heimdal's kdc even implements a read-only ka-server interface, so your users can keep using programs like klog.
This chapter describes how data is stored and how AFS differs from, for example, NFS. It also describes how data is kept consistent, what the requirements were, and how they impacted the design.
3.1 Requirements 3.3 Volume 3.7 Callbacks 3.8 Volume management 3.9 Relationship between pts uid and unix uid
It should be possible to use AFS with hundreds of thousands of users without problems.
Writes that are done to different parts of the filesystem should not affect each other. It should be possible to distribute the reads and writes over many fileservers, so that if a file is accessed by many clients, the load can be spread out.
If there are multiple writers to the same file, are you sure it isn't really a database?
Users should not need to know where their files are stored. It should be possible to move their files while they are using them.
It should be easy for an administrator to make changes to the filesystem, for example to change the quota for a user or project. It should also be possible to move user data from one fileserver to a less loaded one, or to one with more diskspace available.
Some benefits of using AFS are:
AFS isn't constructed for storing databases. It would be possible to use AFS for storing a database if a layer above provided locking and synchronizing of data.
One of the problems is that AFS doesn't include mandatory byte-range locks. AFS uses advisory locking on whole files.
If you need a real database, use one; they are much more efficient at solving database problems. Don't use AFS.
A volume is a unit that is smaller than a partition. It's usually (or should be) a well-defined area, like a user's home directory, a project work area, or a program distribution.
Quota is controlled at the volume level. All day-to-day management is done on volumes.
In AFS, a partition is what is normally called a partition. All partitions that AFS is using are named in a special way, `/vicepNN', where NN ranges from a to z, continuing with aa to zz. The fileserver (and volser) automatically picks up all partitions starting with `/vicep'.
Volumes are stored in a partition; a volume can't span partitions. Partitions are added when the fileserver is created or when a new disk is added to a filesystem.
A clone of a volume is often needed for volume operations. A clone is a copy-on-write copy of a volume; the clone is the read-only version.
Two special versions of a clone are the read-only volume and the backup volume. The read-only volume is a snapshot of a read-write volume (which is what a clone is) that can be replicated to several fileservers to distribute the load. Each fileserver-plus-partition where a read-only resides is called a replication site.
The backup volume is a clone that is typically made (with vos backupsys) each night, to enable users to retrieve yesterday's data when they happen to remove a file. This is a very useful feature, since it lessens the load on the system administrators to restore files from backup. The volume is usually mounted in the user's home directory under the name OldFiles. A special feature of the backup volume is that inside it you can't follow mountpoints.
The volumes are independent of each other. To glue them together there are mountpoints. Mountpoints are really symlinks formatted in a special way that point out a volume (and an optional cell). An AFS cache manager will show a mountpoint as a directory; in fact it will be the root directory of the target volume.
Callbacks are what enable the AFS cache manager to keep files cached without asking the server whether there is a newer version of the file.
A callback is a promise from the fileserver that it will notify the client if the file (or directory) changes within the timelimit of the callback.
For read-only volumes only one callback is given; it's called a volume callback, and it will be broken when the read-only volume is updated.
Callback lifetimes range from 5 minutes to 1 hour, depending on how many users of the file exist.
All volume management is done with the vos command. To get a list of all subcommands, use `vos help'. For help on a specific vos subcommand, use `vos subcommand -h'.
vos create mim c HO.staff.lha.fluff -quota 400000
Volumes can be moved from a server to another, even when users are using the volume.
Read-only volumes can be replicated over several servers. They are first added with vos addsite, and then replicated out over the servers with vos release. You run vos release again when you want to distribute changes made in the read-write volume.
Volumes can also be removed. Note that you shouldn't remove the last read-only volume, since that makes clients misbehave. If you are moving the volume, you should rather add a new RO at the new server and then remove it from the old server.
vos backup and vos backupsys create the backup volume. To stream a volume out to a `file' or `stdout' you use vos dump. The opposite command is named vos restore.
foo
This chapter tries to describe problems that you see in the real (not that perfect) world and show possible solutions to these problems.
4.1 NAT Truly evil stuff. 4.2 Samba Export AFS to Windows computers. 4.3 Integration with Kerberos How to integrate Kerberos with AFS. 4.4 Kerberos tickets and AFS tokens History and tools
There's something evil out there that's called NAT, which stands for Network Address Translation. For whatever reasons, people are using it and will continue doing so.
First of all, it seemed like AFS should work just fine through NAT; you just appear to be coming from the NAT address and some random port instead. Looking closer at different NAT implementations, it seems they have a rather short timeout:
If the client doesn't transmit any traffic to a particular host for that amount of time, the mapping is dropped, and the next packets will be mapped to one of the IP addresses of the NAT server again (if you happen to run PAT, the port will be randomized too).
The authors of Rx realized that keying an Rx connection on the (IP-address, port) pair was a bad idea; one example is the problems you get with multihomed hosts. So Rx keeps its own connection id data in the packet. With this feature, clients and servers should be able to detect address changes.
Unfortunately, the structure of the original Rx code stops this from happening in the Transarc/OpenAFS code. The code matches incoming packets to the right peer (client), but it never updates the (IP-address, port) pair in its data structures, so the answer packet goes to the old (IP-address, port) pair.
If you can control your NAT machine, you can set up static mappings for your AFS hosts (Transarc/OpenAFS uses source port 7000 and Arla uses source port 4711). You can try to use Natkeep http://mit.edu/fredette/www/natkeep/ if you run an old Arla or Transarc/OpenAFS client. From version 0.36, Arla has support for polling the servers at the right interval to prevent NAT from dropping information about your session.
The major problem when exporting the AFS filespace read-write to SMB (Windows file sharing) using Samba is the transfer of the user token to the smb-server. The simple way is to use clear-text passwords between the Windows client and the samba-server, and then to get tokens for the user with this password. This solution is clearly not acceptable for security-aware AFS administrators.
Describe here how to make AFS work "securely" with samba.
Kerberos 4 and 5 can be integrated quite well with AFS, mainly because the security model used in AFS is Kerberos. The kaserver is a Kerberos 4 server with pre-authentication. The kaserver also provides a feature that limits the number of password retries; after that you are locked out for half an hour. This feature can only be enforced through the ka interface, since it requires pre-authentication; and because the kaserver also provides a plain Kerberos 4 interface (without pre-authentication and thus without this limitation), the feature is quite worthless.
Many sites indeed use a Kerberos server instead of a kaserver. One of the reasons is that they want to use Kerberos 5 (which is required for Windows 2000).
More text here how to create a KeyFile, and describe TheseCells.
To further confuse the poor user, AFS and Kerberos programmers decided that they wanted to store their credentials at different places. In AFS, the kernel was a natural place to store the credentials (named token) since the CMU/Transarc AFS/OpenAFS implementation lives in the kernel. The Kerberos people on the other hand thought that storing the credentials (named ticket) in a file would be a good idea.
So now you have to synchronize the credentials if you just want to enter your password once. There are several tools that can do that for you. The question is which tool to use for which problem.
To add to the confusion not all tools talk to both Kerberos and kaservers. There is also a bogus user-id in the token that is supposed to be the same as your pts-user-id. Not that it makes any difference, but some people get confused when unknown numbers show up in the token. The easily confused people are often the ones that have more than one principal in the same realm/cell (read sysadmins).
If you want to get your ticket from your Kerberos server, you use kinit, and then use afslog or aklog to get AFS tokens and push them to the kernel (and AFS daemon). Some kinit (and kauth) variants can do both for you: use kinit --afslog, or simply kauth. Note that kinit and kauth don't set your AFS-token user-id right, which can be confusing for people who think that this is important.
The klog program that you get with Transarc/OpenAFS talks to the kaserver and behaves just right in the sense that it talks to the pts server to get the AFS-token user-id right. But klog talks only to the kaserver, which will not work for people with a Kerberos server.
Klog in Arla was written by Chris Wing wingc@engin.umich.edu as part of a package called afsutils; it does the right thing and talks to the pts server to get the user-id. However, it uses Kerberos libraries to talk to the server, and these libraries require the files `/etc/krb.conf' and `/etc/krb.realms' to be set up correctly for the cell/realm. Not that easy.
A long time ago Ken Hornstein kenh@cmf.nrl.navy.mil wrote the AFS Migration Kit, which helped you migrate from AFS to MIT Kerberos 5. It included a tool named aklog that could convert Kerberos tickets to tokens. This tool was also rewritten in Arla by Brandon S. Allbery allbery@ece.cmu.edu. aklog can't get you new credentials; it just converts old ones to new ones.
Then Transarc decided that they needed to fix a security hole in their kaserver, and while doing that, they managed to break a part in the kaserver so it ceased to work for kerberos requests.
At first the defect existed unnoticed for a long time; later, Transarc did not manage to distribute a working version of the kaserver. Due to this, a lot of sites run a kaserver with this defect. Instead of installing working authentication servers from other sources, people started to whine, and Love lha@stacken.kth.se wrote the tool kalog, which talks the ka protocol (but doesn't get the AFS user-id right) to work around the problem.
All tools that use Kerberos 4 need a working `/etc/krb.conf' and `/etc/krb.realms'. Kerberos 5 programs need `/etc/krb5.conf'. AFS aware tools need `/usr/arla/etc/CellServDB' or `/usr/vice/etc/CellServDB'.
Also, the Kerberos implementations from KTH (kth-krb and Heimdal) include AFS support to make your life more pleasant. One example is that you can have a file `$HOME/.TheseCells' that lists the cells you use, and the Kerberos tools will try to get tickets and tokens for those cells. Heimdal contains support for converting a Kerberos 4 srvtab to an AFS KeyFile.
Below is a table that describes which tool does what, what inheritance(s) they have, and what protocol(s) they speak. From the inheritance (also in a table below) it is possible to deduce what configuration files the tools use.
Tool | Inheritance | Protocol | Produces |
Transarc/OpenAFS klog | afs authlib | KA | Ticket and tokens |
Arla klog | Kerberos and libkafs | Kerberos | Ticket and tokens |
AFS Migration kit's aklog | MIT Kerberos and Ken Hornstein's afslib | Kerberos | Converts Kerberos tickets to tokens |
Arla's aklog | Kerberos and libkafs | Kerberos | Converts Kerberos tickets to tokens |
kth-krb's and Heimdal's afslog | Kerberos and libkafs | Kerberos | Converts Kerberos tickets to tokens |
kalog | arla and libkafs | KA | Get initial ticket, store tokens and tickets |
Inheritance table
Caution: This text just tries to give a general picture. For real info read the code. If you have any questions, mail arla-drinkers@stacken.kth.se.
How does arla work        The relation between Arlad and XFS
5.1 The life of a file
Tools and libs
5.4 The files in arlad/
pioctl and kafs
Arla consists of two parts, a userland process (arlad) and the kernel-module (xfs).
Arlad is written to run in user space for simpler debugging (and less rebooting). As a user-space program, arlad does not have the same limitations as it would have if it were written in the kernel. To avoid performance loss as much as possible, xfs caches data.
xfs and arlad communicate with each other via a char-device driver. They currently use an RPC protocol specially written for this (`arlad/message.c').
xfs is written to be as simple as possible. Theoretically, xfs could be used by other user-space daemons to implement a file system. Some parts, such as syscalls, are arla-specific. These parts are designed to be as general as possible.
For example, xfs does not recognize which pioctl the user-level program calls, it just passes this information on to arlad.
 Userland
                      ---------
  Edit file           | Arlad | ------> Network
      |               ---------
      |                   |
 -----|-------------------|[1]-----
      |                   |  Kernel
   -------             -------
   | VFS | <---[2]---> | XFS |
   -------             -------
A step-by-step description of what happens during the creation of a file. The names are inspired by the BSD-style VFS layer, but the idea is the same in most operating systems.
What other tools the arla suite consists of:
util/libutil.a - A library for the most often used
rx/librx.a - The library for the rx protocol.
lwp/liblwp.a - The library for the lwp thread-package.
ydr/ydr - A stub generator that replaces rxgen.
rxkad/librxkad.a - The rx Kerberos authentication package.
lib/roken/libroken.a - The library that will unbreak
lib/ko/libko.a - A library of functions that are arlad-core
appl/lib/libarlalib.a - A broken library that does all
appl/fs/fs - The fs util, extra feature
appl/vos/vos - The vos util.
appl/pts/pts - The pts util, extra feature: dump.
appl/udebug/udebug - Debug your ubik server.
Rx is run over UDP.
One of rxgen or ydr is used to generate stub-files, ydr is better since it generates prototypes, too.
The current implementation of rx is not that beautiful.
LWP is a non-preemptive thread package. It does its context switching by keeping a private stack for each thread. The heart of the package is select(2).
The stack is checked for overruns in context-switches, but that is often too late. It might be an idea to add a red zone at the top of the stack to be able to detect overruns.
This is a short description of the files, to bring new developers up to speed.
These are the files that contain operating-system specific functions. Today it's just conv_dir().
The pioctl interface is the only part of xfs that is afs related.
pioctl is an ioctl, but called with a path instead of a file descriptor.
When you probe whether there is a live AFS client, you first run k_hasafs(), which probes whether an AFS client is around. It also sets up some static variables in the library. So if you start to do pioctl() without running k_hasafs(), you are in for funny errors, and/or a corefile.
k_hasafs() does an AFSCALL_PIOCTL with opcode VIOCSETTOK and insize == 0, i.e. it tries to set a token (ticket) that is 0 bytes long. This is clearly invalid, and kafs expects to get EINVAL back from syscall(2).
The pioctl syscall is used for more than just AFSCALL_PIOCTL; another use is AFSCALL_SETPAG (setting the pag). It has also been used for setting xfs debugging levels.
When xfs discovers that a path is given in the pioctl(), it does a VOP_LOOKUP on the path, and if the returned value is a vnode that resides in afs, it extracts the xfs-handle for that node (which just happens to be the VenusFid) and passes that on to arlad.
The only ugly thing about the current implementation is that the syscall code assumes that the arlad on "xfs-fd" is the arlad that should get this syscall.
An example of using pioctl():
int
fs_getfilecellname(char *path, char *cell, size_t len)
{
    struct ViceIoctl a_params;

    a_params.in_size  = 0;
    a_params.out_size = len;
    a_params.in       = NULL;
    a_params.out      = cell;

    if (k_pioctl(path, VIOC_FILE_CELL_NAME, &a_params, 1) == -1)
        return errno;
    return 0;
}

int
main(int argc, char **argv)
{
    char cell[100];

    if (!k_hasafs())
        errx(1, "there is no afs");
    if (fs_getfilecellname(".", cell, sizeof(cell)))
        errx(1, "fs_getfilecellname failed");
    printf("cell for `.' is %s\n", cell);
    return 0;
}
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
This chapter of the manual includes tips that are useful when debugging arla.
Arla and xfs contain logging facilities that are quite useful when something goes wrong. These, and some kernel debugging tips, are described here.
6.1 Arlad 6.2 Debugging LWP with GDB 6.3 xfs 6.4 xfs on linux 6.5 Debugging techniques 6.6 Kernel debuggers 6.7 Darwin/MacOS X
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
If arlad is run without any arguments it will fork(2) and log to syslog(3). To disable forking, use the --no-fork (-n) switch. In the current state of the code, arlad should always be started with the recover (-z) switch, which invalidates your cache at startup. This restriction may be dropped in the future.
To enable more debugging, run arlad with the switch --debug=module1,module2,... One useful combination is
--debug=all,-cleaner
A convenient way to debug arlad is to start it inside gdb.
datan:~# gdb /usr/arla/libexec/arlad
(gdb) run -z -n
(gdb) bt
To set the debugging of a running arlad, use fs arladeb as root.
datan:~# fs arladeb
arladebug is: none
datan:~# fs arladeb almost-all
datan:~#
By default, arlad logs through syslog if running as a daemon and to stderr when running in the foreground (with --no-fork).
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
For easy tracing of threads we have a patch (http://www.stacken.kth.se/projekt/arla/gdb-4.18-backfrom.diff) for gdb 4.18 (a new command) and a gdb sequence (think script).
The sequence only works for i386, but it's just a matter of choosing a different offset into topstack to find $fp and $pc in the lwp_ps_internal part of the sequence.
You should copy the `.gdbinit' (which you can find in the arlad directory in the source code) to your home directory or to the directory from where you start the patched gdb, or use the -x flag to gdb.
Your debugging session might look like this:
(gdb) lwp_ps
Runnable[0]
 name: IO MANAGER
  eventlist:
  fp: 0x806aac4
  pc: 0x806aac4
 name: producer
  eventlist: 8048b00
  fp: 0x8083b40
  pc: 0x8083b40
Runnable[1]
[...]
(gdb) help backfrom
Print backtrace of FRAMEPOINTER and PROGRAMCOUNTER.
(gdb) backfrom 0x8083b40 0x8083b40
#0  0x8083b40 in ?? ()
#1  0x8049e2f in LWP_MwaitProcess (wcount=1, evlist=0x8083b70)
    at /afs/e.kth.se/home/staff/lha/src/cvs/arla-foo/lwp/lwp.c:567
#2  0x8049eaf in LWP_WaitProcess (event=0x8048b00)
    at /afs/e.kth.se/home/staff/lha/src/cvs/arla-foo/lwp/lwp.c:585
#3  0x8048b12 in Producer (foo=0x0)
    at /afs/e.kth.se/home/staff/lha/src/cvs/arla-foo/lwp/testlwp.c:76
#4  0x804a00c in Create_Process_Part2 ()
    at /afs/e.kth.se/home/staff/lha/src/cvs/arla-foo/lwp/lwp.c:629
#5  0xfffefdfc in ?? ()
#6  0x8051980 in ?? ()
There is also the possibility to run arla with pthreads (run configure with --with-pthreads).
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
XFS debugging looks almost the same on all platforms. They all share the same debugging flags, but not all flags are enabled on all platforms.
Change the debugging with the fs xfsdebug command.
datan:~# fs xfsdebug
xfsdebug is: none
datan:~# fs xfsdebug almost-all
datan:~#
If it crashes before you have an opportunity to set the debug level, you will have to edit `xfs/your-os/xfs_deb.c' and recompile.
The logging of xfs ends up in your syslog. Syslog usually logs to /var/log or /var/adm (look in /etc/syslog.conf).
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
There is a problem with klogd: it's too slow. Cat the `/proc/kmsg' file instead. Remember to kill klogd, since the reader deletes the text from the ring buffer, and otherwise you will only get some of the messages in your cat.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Kernel debugging can sometimes force you to exercise your imagination. We have learned some different techniques that can be useful.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
On operating systems with a usable kernel debugger you can probably find where in the kernel a user program lives, and thus where it deadlocks or triggers the bad event that later results in a bug. The problem is: how do you stop a process to find out where it did the interesting thing when you can't set a kernel breakpoint?
One way to be notified is to send a signal from the kernel module (psignal() on a BSD, force_sig() on Linux). SIGABRT is quite useful if you want to force a core dump. If you want to continue debugging, use SIGSTOP instead.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Make sure bugs don't get reintroduced.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Kernel debuggers are the most useful tool when you are trying to figure out what's wrong with xfs. Unfortunately they also seem to have their own life and do not always behave as expected.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Kernel debugging on NetBSD, OpenBSD, FreeBSD and Darwin is almost the same. You get the idea from the NetBSD example below:
(gdb) file netbsd.N
(gdb) target kcore netbsd.N.core
(gdb) symbol-file /sys/arch/i386/compile/NUTCRACKER/netbsd.gdb
This example loads the kernel symbols into gdb. But it doesn't load the xfs symbols, and that makes your life harder.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
If you want to use the symbols of xfs, there is a gdb command called `add-symbol-file' that is useful. The symbol file is obtained by loading the kernel module xfs with `kmodload -o /tmp/xfs-sym' (Darwin) or `modload' (NetBSD and OpenBSD). FreeBSD has a linker in the kernel that does the linking instead of relying on `ld'. The address where the module is loaded can be gotten from `modstat', `kldstat' or `kmodstat' (it's in the area field).
If you forgot to run modstat/kldstat/kmodstat, you can extract the information from the kernel. In Darwin you look at the variable kmod (you might have to cast it to a (kmod_info_t *); we have seen gdb lose the debugging info). kmod is the start of a linked list. Other BSDs have some variant of this.
You should also source the commands in /sys/gdbscripts (NetBSD), or System/xnu/osfmk/.gdbinit (Darwin) to get commands like ps inside gdb.
datan:~# modstat
Type     Id  Off Loadaddr Size Info     Rev Module Name
DEV       0   29 ce37f000 002c ce388fc0   1 xfs_mod
[...]

(gdb) add-symbol-file xfs.sym ce37f000
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
One of the differences between the BSDs is the proc command, which enables you to do a backtrace on any process. On FreeBSD you give the proc command a `pid', but on NetBSD and OpenBSD you give a pointer to a struct proc.
After you have run proc to set the current process, you can examine the backtrace with the regular backtrace command.
Darwin doesn't have a proc command. Instead you are supposed to use gdb sequences (System/xnu/osfmk/.gdbinit) to print process stacks, threads, activations, and other information.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
You can't get a crash dump for Linux without patching the kernel. Two projects have support for this: Mission Critical Linux http://www.missioncritiallinux.com and SGI http://oss.sgi.com/.
Remember to save the contents of /proc/ksyms before you crash, since this is needed to figure out where the xfs symbols are located in the kernel.
But you can still use the debugger (or objdump) to figure out where in the binary you crashed. ksymoops can be used to create a backtrace.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Adb is not a symbolic debugger; this means that you have to read the disassembled object code to figure out where it made the wrong turn and died. You might be able to use GNU objdump to list the assembler and source code intertwined (`objdump -S -d mod_xfs.o'). Remember that GNU binutils for sparc-v9 isn't that good yet.
You can find the scripts used for the adb command `$<' in `/usr/lib/adb' and `/usr/platform/PLATFORMNAME/adb'.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
An important thing to know is that you can debug a live kernel too; this can be useful for finding deadlocks. To attach to a kernel, use a command like this on a BSD system (using gdb):
(gdb) file /netbsd
(gdb) target kcore /dev/mem
(gdb) symbol-file /sys/arch/i386/compile/NUTCRACKER/netbsd.gdb
And on Solaris:
# adb -k /dev/ksyms /dev/mem
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Most diagnostic tools, like ps, dmesg, and pstat on BSD systems, used to look in kernel memory to extract information (and thus earned the name kmem-grovelers). On some systems they have been replaced with other methods of getting their data, like /proc and sysctl.
But due to their heritage they can still be used with a kernel and core dump to extract information on some systems.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
You'll need two computers to debug arlad/xfs on Darwin, since the common way to debug is to use a remote kernel debugger over IP/UDP.
First you need to publish the ARP address of the computer that you are going to crash.
We have not found any kernel symbols in MacOSX Public Beta, so you should probably build your own kernel. Use Darwin xnu kernel source with cvs-tag: Apple-103-0-1 (not xnu-103).
gdb xfs.out
target remote-kdp
add-symbol-file ...
attach <host>
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
The largest part of the work needed to port Arla to a new operating system lies in porting xfs, as kernel programming is always harder, less portable and messier than its user-space ditto. Arla in test mode (arla-cli) should work without any porting on any system that is not very far from Unix and that provides berkeley sockets (including cygwin32). The hard part is porting the XFS kernel module, and we will spend most of this text on how to do that.
7.1 user-space 7.2 XFS
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
The user-space parts should work on basically any system that is reasonably POSIX and has berkeley sockets. The build uses autoconf and should adapt itself to most foreseeable circumstances. If it fails to consider something that is missing or not working on the particular OS you are porting to, hard-code it to make sure that is what is missing, and then try to create an autoconf test for it. If you fail to do so, or have no autoconf experience, send us the patches anyway and tell us where you are having the problem.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
The only thing that might take a little bit more effort in porting is the context switch in the LWP user-level threads package. There are assembler versions for most of the common architectures in `lwp'. Part of the problem is getting this code assembled properly. There is unfortunately no easy and portable way of preprocessing and assembling code. There is a script `lwp/make-process.o.sh' that tries to do this in some different ways, but it may fail for you. The next problem is that assembler syntax can vary a lot even on the same CPU. The source files are written in such a way that they should be acceptable to almost any syntax, but if that fails you have to find out what particular syntax has to be used and adapt the source file for it.
The more interesting problem is if there is no support for your CPU. The first thing to try then is the --with-pthreads option that uses the pthreads library. If that fails or you want LWP working you have to figure out enough details on your CPU to write two functions in assembler, `savecontext' and `returnto' that save and restore the processor context.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
In theory, if stuff was documented well enough, you wouldn't need it. In practice it never is, so you find out interfaces specs and how stuff works by reading the source code. If you're unable to find source code for your OS, try finding source for the closest match. If your OS is based on BSD, try the appropriate version of BSD, for example.
You can usually gather quite a lot of information on the workings of the kernel by reading the include files in `<sys/*.h>'.
Try to find out what other XFS port is most similar to your OS and start with that code.
You need to figure out how a few things work in your kernel:
That varies quite a lot, but it's probably easy to figure out if you have the source code for some other loadable module. Sometimes you can get the kernel to add your cdev, system call and file system automatically, but usually you have to write code in your `entry-point' that adds these to the appropriate tables.
The kernel has a table of all known device drivers, ordered by major number. Some kernels have one for block devices and one for character devices, and some have a common one. An entry usually consists of a number of function pointers that perform the operations (open, close, read, write, ...), and possibly a name and some flags. It could look something like the following:
struct cdevsw {
    int (*d_open)();
    int (*d_close)();
    ...
};

struct cdevsw cdevsw[];
These are then usually stored in a table `cdevsw' indexed by the major device number. If you're really lucky there's a new way to get the kernel to add your `struct cdevsw' to the global table when loading the module or a function that does the addition for you. Otherwise there might be functions for adding/removing devices to the global table. If not, you'll have to fallback on looking for a free slot in the table and putting your struct cdevsw there. In some cases, this is not stored in a table but then there'll be a way of adding entries to the new data structure so you don't need to worry about it.
This is quite similar to adding a new cdev, but the table is usually called sysent instead.
Once again, quite similar in principle. The names of the structures tend to vary quite a lot more.
The structure vfsops contains function pointers for all of the file system operations. You need to figure out what operations you need to implement (usually at least mount, unmount, root, sync, and statfs).
The operations that are performed on files are vnode operations (usually stored in a struct vnodeops), and you need to figure out which of these you need and how they should work. Also, though less explicit, how vnodes are supposed to be allocated and freed and such.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Arla has existed for quite some years.
Development started in the fall of 1993 by Björn Grönvall bg@nada.kth.se (with an rxkad implementation); he had a working read-only implementation in the winter of 1994. Quick followers were Assar assar@sics.se (at that time assar@pdc.kth.se) and Johan Danielsson joda@pdc.kth.se. The platform chosen was Sparc SunOS4 (the OS that NADA, KTH was using).
Some work was being done by Patrik Stymne patriks@e.kth.se in porting arla to Ultrix, but this work was never finished.
At this time there was no free rx, lwp or rxkad. A basic rx implementation was written, and the threading problem was solved by using pthreads.
The Arla development started to slow down around 11 April 1995.
Around March-June 1996, rx and lwp were released by Transarc; this was made possible by Jim Doyle jrd@bu.edu and Derrick J. Brashear shadow@dementia.org.
In September 1997, an rxkad implementation was written by Björn. At the same time, a need for an AFS client for OpenBSD arose at Stacken, the local computer club at KTH. Other free OSes, such as NetBSD, FreeBSD and Linux (primarily sparc), were also in need of AFS clients.
In TokKOM, a local communications system using LysKOM (http://www.lysator.liu.se/lyskom/), Assar suggested to some club members that it would be a nice thing to resume the arla development.
Some people suggested that it would be less trouble to have someone with access to the Transarc AFS source code port that code to the relevant platforms. Assar then ported xfs to FreeBSD 2.2.x in no time (overnight), just to show its high portability.
People started to understand that arla was a concept that would work, and the first to join was Love Hörnqvist-Åstrand lha@stacken.kth.se. Development was primarily aimed at OpenBSD and NetBSD at the time, and Arla lived for at least 2-3 weeks in /var/tmp on a host named yakko.stacken.kth.se.
Magnus Ahltorp map@stacken.kth.se joined shortly thereafter, spending the rest of the year reading about the Linux VFS, and after a while, Artur Grabowski art@stacken.kth.se also started to work on arla, concentrating on OpenBSD kernel stuff.
The first entry in ChangeLog is dated Fri Oct 24 17:20:40 1997. Around this time arla was given a CVS tree, to ease development. Now you could also mount the xfs-device and get the root-directory out of it.
The Linux port was done in a few weeks in the beginning of 1998. Only the Linux 2.0 kernel was supported at this time.
In April 1998 Assar had an Arla paper presented at Freenix. Linux 2.1 support was also written around this time. This was a major piece of work, since a lot of stuff had changed (namely the dcache).
The first milko entry is dated Thu Oct 30 01:46:51 1997. Note that this milko in a sense "worked". You could get files out from it and store them.
From this point on a lot of work was done, and quite a lot of studying was "wasted". We learned a lot, but not the stuff we expected to.
We added support for `dynroot' and `fake-mp' to prepare for Windows and Darwin/MacOSX support.
In March 2000, preliminary support for MacOS X/Darwin 1.0 was merged in by Magnus and Assar.
Around the same time, we hacked in support for Solaris 8 (beta2). There was also some work being done on a native Windows 2000 driver at the same time.
In June 2000 there was a presentation on MADE2000 in Gothenburg, Sweden.
In September 2000 MacOS X Beta was working.
This list includes only some milestones; for more information see the ChangeLog.* and NEWS files in the distribution.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Currently writing on arla are
The Rhapsody xfs port was contributed by Alexandra Ellwood <lxs@MIT.EDU>. Later, Rhapsody was renamed Darwin.
Disconnected code is written by:
For contributors, see A. Acknowledgments.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
lwp and rx are copyrighted by IBM. We're grateful to Derrick J Brashear shadow@dementia.org and Jim Doyle jrd@bu.edu for making them available.
The rxkad implementation was written by Björn Grönvall bg@sics.se and is also part of the kth-krb distribution.
Some of the files in `libroken' come from Berkeley by way of NetBSD/FreeBSD.
editline was written by Simmule Turner and Rich Salz.
The code for gluing these together was written by ourselves.
Bugfixes, documentation, encouragement, and code has been contributed by:
If you have done something and are not mentioned here, please send mail to arla-drinkers@stacken.kth.se.
If you are mentioned here and have not contributed, that's because we expect you to.
[ < ] | [ > ] | [ << ] | [ Up ] | [ >> ] | [Top] | [Contents] | [Index] | [ ? ] |
Jump to: | A B C D F G K L M N P S T U V X |
---|
[Top] | [Contents] | [Index] | [ ? ] |
[Top] | [Contents] | [Index] | [ ? ] |
Button | Name | Go to | From 1.2.3 go to |
---|---|---|---|
[ < ] | Back | previous section in reading order | 1.2.2 |
[ > ] | Forward | next section in reading order | 1.2.4 |
[ << ] | FastBack | previous or up-and-previous section | 1.1 |
[ Up ] | Up | up section | 1.2 |
[ >> ] | FastForward | next or up-and-next section | 1.3 |
[Top] | Top | cover (top) of document | |
[Contents] | Contents | table of contents | |
[Index] | Index | concept index | |
[ ? ] | About | this page |