A Guide to the TDF Specification

  1. i. Introduction
  2. 1. Portability
    1. 1.1. Portable Programs
    2. 1.2. Portability Problems
    3. 1.3. APIs and Portability
  3. 2. Introduction to TDF
    1. 2.1. Features of TDF
    2. 2.2. TDF Compilation Phases
    3. 2.3. Aspects of the TDF System
    4. 2.4. TDF and APIs
    5. 2.5. TDF and Conditional Compilation
  4. 3. SORTs and TOKENs
    1. 3.1. Token applications and first-class SORTs
    2. 3.2. Token definitions and declarations
    3. 3.3. A simple use of a TOKEN
    4. 3.4. Second class SORTs
  5. 4. CAPSULEs and UNITs
    1. 4.1. make_capsule and name-spaces
    2. 4.2. Definitions and declarations
  6. 5. SHAPEs, ALIGNMENTs and OFFSETs
    1. 5.1. Shapes
    2. 5.2. Alignments
    3. 5.3. Pointer and offset SHAPEs
    4. 5.4. Compound SHAPEs
    5. 5.5. BITFIELD alignments
  7. 6. Procedures and Locals
    1. 6.1. make_proc and apply_proc
    2. 6.2. make_general_proc and apply_general_proc
    3. 6.3. Defining and using locals
    4. 6.4. Heap storage
  8. 7. Control Flow within procedures
    1. 7.1. Unconditional flow
    2. 7.2. Conditional flow
    3. 7.3. Exceptional flow
  9. 8. Values, variables and assignments.
    1. 8.1. contents
    2. 8.2. assign
    3. 8.3. TRANSFER_MODE operations
    4. 8.4. Assigning and extracting bitfields
  10. 9. Operations
    1. 9.1. VARIETY and overflow
    3. 9.3. Division and remainder
    4. 9.4. change_variety
    5. 9.5. and, or, not, xor
    6. 9.6. Floating-point operations, ROUNDING_MODE
    7. 9.7. change_bitfield_to_int, change_int_to_bitfield
    8. 9.8. make_compound, make_nof, n_copies
  11. 10. Constants
    1. 10.1. _cond constructors
    2. 10.2. Primitive constant constructors
  12. 11. Tokens and APIs
    1. 11.1. Application programming interfaces
    2. 11.2. Linking to APIs
    3. 11.3. Target independent headers, unique_extern
    4. 11.4. Language programming interfaces
    5. 11.5. Namespaces and APIs
  13. 12. TDF transformations
    1. 12.1. Transformations as definitions
    2. 12.2. Examples of transformations
    3. 12.3. Programs with undefined values
  14. 13. TDF expansions of offsets
    1. 13.1. Offsets within compound types
    2. 13.2. Bitfield offsets
  15. 14. Models of the TDF algebra
    1. 14.1. Model for a 32-bit standard architecture
    2. 14.2. Model for machines like the iAPX-432
  16. ii. Conclusion

First published .

Revision History

  - Merged TDF and Portability paper into the TDF Guide.
  - Moved out andfutils to standalone tools.
  - Moved out andfutils as a standalone package.
  - TenDRA 4.1.2 release.

i. Introduction

TDF is the name of the technology developed at DRA which has been adopted by the Open Software Foundation (OSF), Unix System Laboratories (USL), the European Community's Esprit Programme and others as their Architecture Neutral Distribution Format (ANDF). To date much of the discussion surrounding it has centred on the question, "How do you distribute portable software?". This paper concentrates on the more difficult question, "How do you write portable software in the first place?" and shows how TDF can be a valuable tool to aid the writing of portable software. Most of the discussion centres on programs written in C and is Unix specific. This is because most of the experience of TDF to date has been in connection with C in a Unix environment, and not because of any inbuilt bias in TDF.

This memo is intended to be a fairly detailed commentary on the specification of TDF, a kind of Talmud to the Torah. If it conflicts with the specification document, it is wrong. The aim is to elucidate the various constructions of TDF, giving examples of their use both from the point of view of a producer of TDF and from that of the installers or translators which use it to construct programs on particular platforms. In addition, some attempt is made to give the reasons why these particular constructions have been chosen. Most of the commentary is a distillation of questions and answers raised by people trying to learn TDF from the specification document.

Throughout this document, references like (S5.1) are to headings in the TDF Specification. I use the term "compiling" or "producing" to mean the production of TDF from some source language, and "translating" to mean making a program for some specific platform from TDF.

I use the first person where I am expressing my own opinions or preferences; these should not be taken as official opinions of DRA or the TenDRA team.

The discussion of portability is divided into two sections. Firstly some of the problems involved in writing portable programs are considered. The intention is not only to catalogue what these problems are, but to introduce ways of looking at them which will be important in the second section, where TDF is introduced. This deals with the TDF approach to portability.

1. Portability

  1. 1.1. Portable Programs
    1. 1.1.1. Definitions and Preliminary Discussion
    2. 1.1.2. Separation and Combination of Code
    3. 1.1.3. Application Programming Interfaces
    4. 1.1.4. Compilation Phases
  2. 1.2. Portability Problems
    1. 1.2.1. Programming Problems
    2. 1.2.2. Code Transformation Problems
    3. 1.2.3. Code Combination Problems
    4. 1.2.4. API Problems
  3. 1.3. APIs and Portability
    1. 1.3.1. Target Dependent Code
    2. 1.3.2. Making APIs Explicit
    3. 1.3.3. Choosing an API
    4. 1.3.4. Alternative Program Versions

We start by examining some of the problems involved in the writing of portable programs. Although the discussion is very general, and makes no mention of TDF, many of the ideas introduced are of importance in §2.

1.1. Portable Programs

1.1.1. Definitions and Preliminary Discussion

Let us firstly say what we mean by a portable program. A program is portable to a number of machines if it can be compiled to give the same functionality on all those machines. Note that this does not mean that exactly the same source code is used on all the machines. One could envisage a program written in, say, 68020 assembly code for a certain machine which has been translated into 80386 assembly code for some other machine to give a program with exactly equivalent functionality. This would, under our definition, be a program which is portable to these two machines. At the other end of the scale, the C program:

#include <stdio.h>

int main ()
{
	fputs ( "Hello world\n", stdout ) ;
	return ( 0 ) ;
}

which prints the message, "Hello world", onto the standard output stream, will be portable to a vast range of machines without any need for rewriting. Most of the portable programs we shall be considering fall closer to the latter end of the spectrum - they will largely consist of target independent source with small sections of target dependent source for those constructs for which target independent expression is either impossible or of inadequate efficiency.

Note that we are defining portability in terms of a set of target machines and not as some universal property. The act of modifying an existing program to make it portable to a new target machine is called porting. Clearly in the examples above, porting the first program would be a highly complex task involving almost an entire rewrite, whereas in the second case it should be trivial.

1.1.2. Separation and Combination of Code

So why is the second example above more portable (in the sense of more easily ported to a new machine) than the first? The first, obvious, point to be made is that it is written in a high-level language, C, rather than the low-level languages, 68020 and 80386 assembly codes, used in the first example. By using a high-level language we have abstracted out the details of the processor to be used and expressed the program in an architecture neutral form. It is one of the jobs of the compiler on the target machine to transform this high-level representation into the appropriate machine dependent low-level representation.

The second point is that the second example program is not in itself complete. The objects fputs and stdout, representing the procedure to output a string and the standard output stream respectively, are left undefined. Instead the header stdio.h is included on the understanding that it contains the specification of these objects.

A version of this file is to be found on each target machine. On a particular machine it might contain something like:

typedef struct {
	int __cnt ;
	unsigned char *__ptr ;
	unsigned char *__base ;
	short __flag ;
	char __file ;
} FILE ;

extern FILE __iob[60];
#define stdout (&__iob[1])

extern int fputs(const char *, FILE *);

meaning that the type FILE is defined by the given structure, __iob is an external array of 60 FILE's, stdout is a pointer to the second element of this array, and that fputs is an external procedure which takes a const char * and a FILE * and returns an int. On a different machine, the details may be different (exactly what we can, or cannot, assume is the same on all target machines is discussed below).

These details are fed into the program by the pre-processing phase of the compiler. (The various compilation phases are discussed in more detail in §1.1.4.) This is a simple, preliminary textual substitution. It provides the definitions of the type FILE and the value stdout (in terms of __iob), but leaves the precise definitions of __iob and fputs unresolved (although we do know their types). The definitions of these values are not provided until the final phase of the compilation - linking - where they are linked in from the precompiled system libraries.

Note that, even after the pre-processing phase, our portable program has been transformed into a target dependent form, because of the substitution of the target dependent values from stdio.h. If we had also included the definitions of __iob and, more particularly, fputs, things would have been even worse - the procedure for outputting a string to the screen is likely to be highly target dependent.

To conclude, we have, by including stdio.h, been able to effectively separate the target independent part of our program (the main program) from the target dependent part (the details of stdout and fputs). It is one of the jobs of the compiler to recombine these parts to produce a complete program.

1.1.3. Application Programming Interfaces

As we have seen, the separation of the target dependent sections of a program into the system headers and system libraries greatly facilitates the construction of portable programs. What has been done is to define an interface between the main program and the existing operating system on the target machine in abstract terms. The program should then be portable to any machine which implements this interface correctly.

The interface for the "Hello world" program above might be described as follows : defined in the header stdio.h are a type FILE representing a file, an object stdout of type FILE * representing the standard output file, and a procedure fputs with prototype:

int fputs(const char *s, FILE *f);

which prints the string s to the file f. This is an example of an Application Programming Interface (API). Note that it can be split into two aspects, the syntactic (what they are) and the semantic (what they mean). On any machine which implements this API our program is both syntactically correct and does what we expect it to.

The benefit of describing the API at this fairly high level is that it leaves scope for a range of implementation (and thus more machines which implement it) while still encapsulating the main program's requirements.

In the example implementation of stdio.h above we see that this machine implements this API correctly syntactically, but not necessarily semantically. One would have to read the documentation provided on the system to be sure of the semantics.

Another way of defining an API for this program would be to note that the given API is a subset of the ANSI C standard. Thus we could take ANSI C as an "off the shelf" API. It is then clear that our program should be portable to any ANSI-compliant machine.

It is worth emphasising that all programs have an API, even if it is implicit rather than explicit. However it is probably fair to say that programs without an explicit API are only portable by accident. We shall have more to say on this subject later.

1.1.4. Compilation Phases

The general plan for how to write the extreme example of a portable program, namely one which contains no target dependent code, is now clear. It is shown in the compilation diagram in Figure 1, which represents the traditional compilation process. This diagram is divided into four sections. The left half of the diagram represents the actual program and the right half the associated API. The top half of the diagram represents target independent material - things which only need to be done once - and the bottom half target dependent material - things which need to be done on every target machine.

[Figure: the portable program source (.c/.cc) is pre-processed (cpp) with the system headers (.h) to give a target-dependent program (.i/.ii); this is compiled (cc) to assembly source (.s), assembled (as) to a binary object (.o), and linked (ld) with the system libraries (.o/.a/.so) to give the executable. The target-independent program source conforms to the target-independent API specification; the target-dependent system headers and system libraries implement it.]
Figure 1. Traditional Compilation Phases

So, we write our target independent program (top left), conforming to the target independent API specification (top right). All the compilation actually takes place on the target machine. This machine must have the API correctly implemented (bottom right). This implementation will in general be in two parts - the system headers, providing type definitions, macros, procedure prototypes and so on, and the system libraries, providing the actual procedure definitions. Another way of characterising this division is between syntax (the system headers) and semantics (the system libraries).

The compilation is divided into three main phases. Firstly the system headers are inserted into the program by the pre-processor. This produces, in effect, a target dependent version of the original program. This is then compiled into a binary object file. During the compilation process the compiler inserts all the information it has about the machine - including the Application Binary Interface (ABI) - the sizes of the basic C types, how they are combined into compound types, the system procedure calling conventions and so on. This ensures that in the final linking phase the binary object file and the system libraries are obeying the same ABI, thereby producing a valid executable. (On a dynamically linked system this final linking phase takes place partially at run time rather than at compile time, but this does not really affect the general scheme.)

The compilation scheme just described consists of a series of phases of two types ; code combination (the pre-processing and system linking phases) and code transformation (the actual compilation phases). The existence of the combination phases allows for the effective separation of the target independent code (in this case, the whole program) from the target dependent code (in this case, the API implementation), thereby aiding the construction of portable programs. These ideas on the separation, combination and transformation of code underlie the TDF approach to portability.

1.2. Portability Problems

We have set out a scheme whereby it should be possible to write portable programs with a minimum of difficulties. So why, in reality, does it cause so many problems? Recall that we are still primarily concerned with programs which contain no target dependent code, although most of the points raised apply by extension to all programs.

1.2.1. Programming Problems

A first, obvious class of problems concerns the program itself. It is to be assumed that as many bugs as possible have been eliminated by testing and debugging on at least one platform before a program is considered as a candidate for being a portable program. But for even the most self-contained program, working on one platform is no guarantee of working on another. The program may use undefined behaviour - using uninitialised values or dereferencing null pointers, for example - or have built-in assumptions about the target machine - whether it is big-endian or little-endian, or what the sizes of the basic integer types are, for example. This latter point is going to become increasingly important over the next couple of years as 64-bit architectures begin to be introduced. How many existing programs implicitly assume a 32-bit architecture?
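Such built-in assumptions are easy to make without noticing. The sketch below (an illustration added here, not taken from the original paper) turns two of them into explicit run-time checks; a program which instead hard-wires the answers - by shifting by 32, say, or by picking a particular byte out of an int - is tied to the machines where those answers happen to hold.

```c
#include <limits.h>

/* Two common built-in assumptions, made explicit. A portable program
 * must not hard-wire the answers to either question. */

/* How many bits does a long occupy on this machine? */
int long_bits(void)
{
	return (int)(sizeof(long) * CHAR_BIT);
}

/* Is the machine little-endian, i.e. is the least significant byte
 * of an int stored first in memory? */
int is_little_endian(void)
{
	unsigned int u = 1;
	return *(unsigned char *)&u == 1;
}
```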

Many of these built-in assumptions may arise because of the conventional porting process. A program is written on one machine, modified slightly to make it work on a second machine, and so on. This means that the program is "biased" towards the existing set of target machines, and most particularly to the original machine it was written on. This applies not only to assumptions about endianness, say, but also to the questions of API conformance which we will be discussing below.

Most compilers will pick up some of the grosser programming errors, particularly by type checking (including procedure arguments if prototypes are used). Some of the subtler errors can be detected using the -Wall option to the Free Software Foundation's GNU C Compiler (gcc) or separate program checking tools such as lint, for example, but this remains a very difficult area.

1.2.2. Code Transformation Problems

We now move on from programming problems to compilation problems. As we mentioned above, compilation may be regarded as a series of phases of two types : combination and transformation. Transformation of code - translating a program in one form into an equivalent program in another form - may lead to a variety of problems. The code may be transformed wrongly, so that the equivalence is broken (a compiler bug), or in an unexpected manner (differing compiler interpretations), or not at all, because it is not recognised as legitimate code (a compiler limitation). The latter two problems are most likely when the input is a high level language, with complex syntax and semantics.

Note that in Figure 1 all the actual compilation takes place on the target machine. So, to port the program to n machines, we need to deal with the bugs and limitations of n, potentially different, compilers. For example, if you have written your program using prototypes, it is going to be a large and rather tedious job porting it to a compiler which does not have prototypes (this particular example can be automated; not all such jobs can). Other compiler limitations can be surprising - not understanding the L suffix for long numeric literals and not allowing members of enumeration types as array indexes are among the problems drawn from my personal experience.

The differing compiler interpretations may be more subtle. For example, there are differences between ANSI and "traditional" C which may trap the unwary. Examples are the promotion of integral types and the resolution of the linkage of static objects.
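The integral promotion difference can be seen in a single expression. In this sketch (an added illustration; the function name is mine), ANSI's value-preserving rules promote the unsigned char to a signed int, so the comparison below is signed; a "traditional" unsigned-preserving compiler promoted it to unsigned int instead, silently making the comparison unsigned and changing the result.

```c
/* Under ANSI value-preserving promotion, uc becomes a (signed) int,
 * so uc - 1 is -1 and the comparison holds. Under the older
 * unsigned-preserving rules uc became unsigned int, uc - 1 wrapped
 * round to a huge positive value, and the comparison failed. */
int comparison_is_signed(void)
{
	unsigned char uc = 0;
	return (uc - 1) < 0;
}
```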

Many of these problems may be reduced by using the "same" compiler on all the target machines. For example, gcc has a single C front end (C to RTL) which may be combined with an appropriate back end (RTL to target) to form a suitable compiler for a wide range of target machines. The existence of a single front end virtually eliminates the problems of differing interpretation of code and compiler quirks. It also reduces the exposure to bugs. Instead of being exposed to the bugs in n separate compilers, we are now only exposed to bugs in one half-compiler (the front end) plus n half-compilers (the back ends) - a total of (n + 1) / 2. (This calculation is not meant totally seriously, but it is true in principle.) Front end bugs, when tracked down, also only require a single workaround.

1.2.3. Code Combination Problems

If code transformation problems may be regarded as a time consuming irritation, involving the rewriting of sections of code or using a different compiler, the second class of problems, those concerned with the combination of code, are far more serious.

The first code combination phase is the pre-processor pulling in the system headers. These can contain some nasty surprises. For example, consider a simple ANSI compliant program which contains a linked list of strings arranged in alphabetical order. This might also contain a routine:

void index(char *);

which adds a string to this list in the appropriate position, using strcmp from string.h to find it. This works fine on most machines, but on some it gives the error:

Only 1 argument to macro 'index'

The reason for this is that the system version of string.h contains the line:

#define index(s, c) strchr(s, c)

But this has nothing to do with ANSI; the macro is defined for compatibility with BSD.

In reality the system headers on any given machine are a hodge podge of implementations of different APIs, and it is often virtually impossible to separate them (feature test macros such as _POSIX_SOURCE are of some use, but are not always implemented and do not always produce a complete separation; they are only provided for "standard" APIs anyway). The problem above arose because there is no transitivity rule of the form : if program P conforms to API A, and API B extends A, then P conforms to B. The only reason this rule fails is namespace problems of this kind.

A second example demonstrates a slightly different point. The POSIX standard states that sys/stat.h contains the definition of the structure struct stat, which includes several members, amongst them:

time_t st_atime;

representing the access time for the corresponding file. So the program:

#include <sys/types.h>
#include <sys/stat.h>

time_t st_atime ( struct stat *p )
{
	return ( p->st_atime ) ;
}

should be perfectly valid - the procedure name st_atime and the field selector st_atime occupy different namespaces (see however the appendix on namespaces and APIs below). However at least one popular operating system has the implementation:

struct stat {
	union {
		time_t st__sec;
		timestruc_t st__tim;
	} st_atim;
	/* ... */
};
#define st_atime st_atim.st__sec

This seems like a perfectly legitimate implementation. In the program above the field selector st_atime is replaced by st_atim.st__sec by the pre-processor, as intended, but unfortunately so is the procedure name st_atime, leading to a syntax error.

The problem here is not with the program or the implementation, but in the way they were combined. C does not allow individual field selectors to be defined. Instead the indiscriminate sledgehammer of macro substitution was used, leading to the problem described.

Problems can also occur in the other combination phase of the traditional compilation scheme, the system linking. Consider the ANSI compliant routine:

#include <stdio.h>

int open ( char *nm )
{
	int c, n = 0 ;

	FILE *f = fopen ( nm, "r" ) ;
	if ( f == NULL ) return ( -1 ) ;
	while ( c = getc ( f ), c != EOF )
		n++ ;
	( void ) fclose ( f ) ;
	return ( n ) ;
}

which opens the file nm, returning its size in bytes if it exists and -1 otherwise. As a quick porting exercise, I compiled it under six different operating systems. On three it worked correctly; on one it returned -1 even when the file existed; and on two it crashed with a segmentation error.

The reason for this lies in the system linking. On those machines which failed the library routine fopen calls (either directly or indirectly) the library routine open (which is in POSIX, but not ANSI). The system linker, however, linked my routine open instead of the system version, so the call to fopen did not work correctly.
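The simplest defence, sketched below (the my_ prefix is purely illustrative, not part of the original example), is to keep the program's external names out of the namespace the system libraries use among themselves:

```c
#include <stdio.h>

/* The routine from the text, renamed so that it cannot capture the
 * POSIX library routine open when the system libraries are linked.
 * Returns the size of the file nm in bytes, or -1 if it cannot be
 * opened. */
int my_open ( char *nm )
{
	int c, n = 0 ;

	FILE *f = fopen ( nm, "r" ) ;
	if ( f == NULL ) return ( -1 ) ;
	while ( c = getc ( f ), c != EOF )
		n++ ;
	( void ) fclose ( f ) ;
	return ( n ) ;
}
```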

So code combination problems are primarily namespace problems. The task of combining the program with the API implementation on a given platform is complicated by the fact that, because the system headers and system libraries contain things other than the API implementation, or even because of the particular implementation chosen, the various namespaces in which the program is expected to operate become "polluted".

1.2.4. API Problems

We have said that the API defines the interface between the program and the standard library provided with the operating system on the target machine. There are three main problems concerned with APIs. The first, how to choose the API in the first place, is discussed separately. Here we deal with the compilation aspects : how to check that the program conforms to its API, and what to do about incorrect API implementations on the target machine(s).

API Checking

The problem of whether or not a program conforms to its API - not using any objects from the operating system other than those specified in the API, and not making any unwarranted assumptions about these objects - is one which does not always receive sufficient attention, mostly because the necessary checking tools do not exist (or at least are not widely available). Compiling the program on a number of API compliant machines merely checks the program against the system headers for these machines. For a genuine portability check we need to check against the abstract API description, thereby in effect checking against all possible implementations.

Recall from above that the system headers on a given machine are an amalgam of all the APIs it implements. This can cause programs which should compile not to, because of namespace clashes; but it may also cause programs to compile which should not, because they have used objects which are not in their API, but which are in the system headers. For example, the supposedly ANSI compliant program:

#include <signal.h>
int sig = SIGKILL ;

will compile on most systems, despite the fact that SIGKILL is not an ANSI signal, because SIGKILL is in POSIX, which is also implemented in the system signal.h. Again, feature test macros are of some use in trying to isolate the implementation of a single API from the rest of the system headers. However they are highly unlikely to detect the error in the following supposedly POSIX compliant program which prints the entries of the directory nm, together with their inode numbers:

#include <stdio.h>
#include <sys/types.h>
#include <dirent.h>

void listdir ( char *nm )
{
	struct dirent *entry ;

	DIR *dir = opendir ( nm ) ;
	if ( dir == NULL )
		return ;
	while ( entry = readdir ( dir ), entry != NULL ) {
		printf ( "%s : %d\n", entry->d_name, ( int ) entry->d_ino ) ;
	}
	( void ) closedir ( dir ) ;
	return ;
}

This is not POSIX compliant because, whereas the d_name field of struct dirent is in POSIX, the d_ino field is not. It is however in XPG3, so it is likely to be in many system implementations.
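A strictly POSIX-conformant version would restrict itself to the d_name field. The sketch below (a reworking of the example, returning a count so that its effect can be checked) does so:

```c
#include <stdio.h>
#include <sys/types.h>
#include <dirent.h>

/* POSIX-only variant of listdir: it uses only the d_name field of
 * struct dirent, which POSIX guarantees, and not d_ino, which it
 * does not. Returns the number of entries printed, or -1 if the
 * directory cannot be opened. */
int listdir ( char *nm )
{
	int n = 0 ;
	struct dirent *entry ;

	DIR *dir = opendir ( nm ) ;
	if ( dir == NULL )
		return ( -1 ) ;
	while ( entry = readdir ( dir ), entry != NULL ) {
		printf ( "%s\n", entry->d_name ) ;
		n++ ;
	}
	( void ) closedir ( dir ) ;
	return ( n ) ;
}
```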

The previous examples have been concerned with simply telling whether or not a particular object is in an API. A more difficult, and in a way more important, problem is that of assuming too much about the objects which are in the API. For example, in the program:

#include <stdio.h>
#include <stdlib.h>

div_t d = { 3, 4 } ;

int main ()
{
	printf ( "%d,%d\n", d.quot, d.rem ) ;
	return ( 0 ) ;
}

the ANSI standard specifies that the type div_t is a structure containing two fields, quot and rem, of type int, but it does not specify which order these fields appear in, or indeed if there are other fields. Therefore the initialisation of d is not portable. Again, the type time_t is used to represent times in seconds since a certain fixed date. On most systems this is implemented as long, so it is tempting to use ( t & 1 ) to determine for a time_t t whether this number of seconds is odd or even. But ANSI actually says that time_t is an arithmetic, not an integer, type, so it would be possible for it to be implemented as double. But in this case ( t & 1 ) is not even type correct, so it is not a portable way of finding out whether t is odd or even.

API Implementation Errors

Undoubtedly the problem which causes the writer of portable programs the greatest headache (and heartache) is that of incorrect API implementations. However carefully you have chosen your API and checked that your program conforms to it, you are still reliant on someone (usually the system vendor) having implemented this API correctly on the target machine. Machines which do not implement the API at all do not enter the equation (they are not suitable target machines); what causes problems is incorrect implementations. As the implementation may be divided into two parts - system headers and system libraries - we shall similarly divide our discussion. Inevitably the choice of examples is personal; anyone who has ever attempted to port a program to a new machine is likely to have their own favourite examples.

System Header Problems

Some header problems are immediately apparent because they are syntactic and cause the program to fail to compile. For example, values may not be defined or be defined in the wrong place (not in the header prescribed by the API).

A common example (one which I have to include a workaround for in virtually every program I write) is that EXIT_SUCCESS and EXIT_FAILURE are not always defined (ANSI specifies that they should be in stdlib.h). It is tempting to change exit (EXIT_FAILURE) to exit (1) because everyone knows that EXIT_FAILURE is 1. But this is to decrease the portability of the program because it ties it to a particular class of implementations. A better workaround would be:

#include <stdlib.h>
#ifndef EXIT_FAILURE
#define EXIT_FAILURE 1
#endif

which assumes that anyone choosing a non-standard value for EXIT_FAILURE is more likely to put it in stdlib.h. Of course, if one subsequently came across a machine on which not only is EXIT_FAILURE not defined, but also the value it should have is not 1, then it would be necessary to resort to #ifdef machine_name statements. The same is true of all the API implementation problems we shall be discussing : non-conformant machines require workarounds involving conditional compilation. As more machines are considered, so these conditional compilations multiply.
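The shape such accumulated workarounds take is sketched below (machine_name stands, as in the text, for some real predefined macro identifying the offending machine; the value 2 is purely illustrative):

```c
#include <stdlib.h>

/* How per-machine workarounds accumulate as non-conformant machines
 * are encountered: first a guard for machines which omit the
 * definition, then a nested case for a hypothetical machine on which
 * the conventional value would be wrong. */
#ifndef EXIT_FAILURE
#ifdef machine_name
#define EXIT_FAILURE 2
#else
#define EXIT_FAILURE 1
#endif
#endif

int failure_code(void)
{
	return EXIT_FAILURE;
}
```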

As an example of things being defined in the wrong place, ANSI specifies that SEEK_SET, SEEK_CUR and SEEK_END should be defined in stdio.h, whereas POSIX specifies that they should also be defined in unistd.h. It is not uncommon to find machines on which they are defined in the latter but not in the former. A possible workaround in this case would be:

#include <stdio.h>
#ifndef SEEK_SET
#include <unistd.h>
#endif

Of course, by including "unnecessary" headers like unistd.h the risk of namespace clashes such as those discussed above is increased.

A final syntactic problem, which perhaps should belong with the system header problems above, concerns dependencies between the headers themselves. For example, the POSIX header unistd.h declares functions involving some of the types pid_t, uid_t etc, defined in sys/types.h. Is it necessary to include sys/types.h before including unistd.h, or does unistd.h automatically include sys/types.h? The approach of playing safe and including everything will normally work, but this can lead to multiple inclusions of a header. This will normally cause no problems because the system headers are protected against multiple inclusions by means of macros, but it is not unknown for certain headers to be left unprotected. Also not all header dependencies are as clear cut as the one given, so that what headers need to be included, and in what order, is in fact target dependent.
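The protection referred to is conventionally an include guard: the header wraps its entire contents in a macro test so that a second textual inclusion collapses to nothing. A sketch of the convention (the guard name and typedef are illustrative, not those of any real system header):

```c
/* First "inclusion": the guard macro is not yet defined, so the body
 * is compiled and the guard is defined. */
#ifndef EXAMPLE_TYPES_H
#define EXAMPLE_TYPES_H
typedef int example_id_t;
#endif

/* Second textual "inclusion" of the same header text: the guard is
 * now defined, so the duplicate typedef is skipped entirely. */
#ifndef EXAMPLE_TYPES_H
#define EXAMPLE_TYPES_H
typedef int example_id_t;
#endif

/* The type was defined exactly once and is usable. */
int guard_demo(void)
{
	example_id_t x = 42;
	return (int) x;
}
```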

There can also be semantic errors in the system headers : namely wrongly defined values. The following two examples are taken from real operating systems. Firstly the definition:

#define DBL_MAX 1.797693134862316E+308

in float.h on an IEEE-compliant machine is subtly wrong - the given value does not fit into a double - the correct value is:

#define DBL_MAX 1.7976931348623157E+308

Again, the type definition:

typedef int size_t ; /* ??? */

(sic) is not compliant with ANSI, which says that size_t is an unsigned integer type. (I'm not sure if this is better or worse than another system which defines ptrdiff_t to be unsigned int when it is meant to be signed. This would mean that the difference between any two pointers is always positive.) These particular examples are irritating because it would have cost nothing to get things right, correcting the value of DBL_MAX and changing the definition of size_t to unsigned int. These corrections are so minor that the modified system headers would still be a valid interface for the existing system libraries (we shall have more to say about this later). However it is not possible to change the system headers, so it is necessary to build workarounds into the program. Whereas in the first case it is possible to devise such a workaround:

#include <float.h>
#ifdef machine_name
#undef DBL_MAX
#define DBL_MAX 1.7976931348623157E+308
#endif

for example, in the second, because size_t is defined by a typedef it is virtually impossible to correct in a simple fashion. Thus any program which relies on the fact that size_t is unsigned will require considerable rewriting before it can be ported to this machine.

System Library Problems

The system header problems just discussed are primarily syntactic problems. By contrast, system library problems are primarily semantic - the provided library routines do not behave in the way specified by the API. This makes them harder to detect. For example, consider the routine:

void *realloc ( void *p, size_t s ) ;

which reallocates the block of memory p to have size s bytes, returning the new block of memory. The ANSI standard says that if p is the null pointer, then the effect of realloc ( p, s ) is the same as malloc ( s ), that is, to allocate a new block of memory of size s. This behaviour is exploited in the following program, in which the routine add_char adds a character to the expanding array, buffer:

#include <stdio.h>
#include <stdlib.h>

char *buffer = NULL ;
int buff_sz = 0, buff_posn = 0 ;

void add_char ( char c )
{
	if ( buff_posn >= buff_sz ) {
		buff_sz += 100 ;
		buffer = ( char * ) realloc ( ( void * ) buffer, buff_sz * sizeof ( char ) ) ;
		if ( buffer == NULL ) {
			fprintf ( stderr, "Memory allocation error\n" ) ;
			exit ( EXIT_FAILURE ) ;
		}
	}
	buffer [ buff_posn++ ] = c ;
	return ;
}
On the first call of add_char, buffer is set to a real block of memory (as opposed to NULL) by a call of the form realloc ( NULL, s ). This is extremely convenient and efficient - if it was not for this behaviour we would have to have an explicit initialisation of buffer, either as a special case in add_char or in a separate initialisation routine.

Of course this all depends on the behaviour of realloc ( NULL, s ) having been implemented precisely as described in the ANSI standard. The first indication that this is not so on a particular target machine might be when the program is compiled and run on that machine for the first time and does not perform as expected. To track the problem down will demand time debugging the program.

Once the problem has been identified as being with realloc a number of possible workarounds are possible. Perhaps the most interesting is to replace the inclusion of stdlib.h by the following:

#include <stdlib.h>
#ifdef machine_name
#define realloc( p, s )\
	( ( p ) ? ( realloc ) ( p, s ) : malloc ( s ) )
#endif

where realloc ( p, s ) is redefined as a macro which is the result of the procedure realloc if p is not null, and malloc ( s ) otherwise. (In fact this macro will not always have the desired effect, although it does in this case. Why (exercise)?)

The only alternative to this trial and error approach to finding API implementation problems is the application of personal experience, either of the particular target machine or of things that are implemented wrongly by many machines and as such should be avoided. This sort of detailed knowledge is not easily acquired. Nor can it ever be complete: new operating system releases are becoming increasingly regular and are on occasions quite as likely to introduce new implementation errors as to solve existing ones. It is in short a "black art".

1.3. APIs and Portability

We now return to our discussion of the general issues involved in portability to more closely examine the role of the API.

1.3.1. Target Dependent Code

So far we have been considering programs which contain no conditional compilation, in which the API forms the basis of the separation of the target independent code (the whole program) and the target dependent code (the API implementation). But a glance at most large C programs will reveal that they do contain conditional compilation. The code is scattered with #if's and #ifdef's which, in effect, cause the pre-processor to construct slightly different programs on different target machines. So here we do not have a clean division between the target independent and the target dependent code - there are small sections of target dependent code spread throughout the program.

Let us briefly consider some of the reasons why it is necessary to introduce this conditional compilation. Some have already been mentioned - workarounds for compiler bugs, compiler limitations, and API implementation errors; others will be considered later. However the most interesting and important cases concern things which need to be done genuinely differently on different machines. This can be because they really cannot be expressed in a target independent manner, or because the target independent way of doing them is unacceptably inefficient.

Efficiency (either in terms of time or space) is a key issue in many programs. The argument is often advanced that writing a program portably means using the, often inefficient, lowest common denominator approach. But under our definition of portability it is the functionality that matters, not the actual source code. There is nothing to stop different code being used on different machines for reasons of efficiency.

To examine the relationship between target dependent code and APIs, consider the simple program:

#include <stdio.h>

int main ()
{
#ifdef mips
	fputs ( "This machine is a mips\n", stdout ) ;
#endif
	return ( 0 ) ;
}

which prints a message if the target machine is a mips. What is the API of this program? Basically it is the same as in the Hello world example discussed in sections 2.1.1 and 2.1.2, but if we wish the API to fully describe the interface between the program and the target machine, we must also say that whether or not the macro mips is defined is part of the API. Like the rest of the API, this has a semantic aspect as well as a syntactic - in this case that mips is only defined on mips machines. Where it differs is in its implementation. Whereas the main part of the API is implemented in the system headers and the system libraries, the implementation of either defining, or not defining, mips ultimately rests with the person performing the compilation. (In this particular example, the macro mips is normally built into the compiler on mips machines, but this is only a convention.)

So the API in this case has two components : a system-defined part which is implemented in the system headers and system libraries, and a user-defined part which ultimately relies on the person performing the compilation to provide an implementation. The main point to be made in this section is that introducing target dependent code is equivalent to introducing a user-defined component to the API. The actual compilation process in the case of programs containing target dependent code is basically the same as that shown in the earlier compilation diagram. But whereas previously the vertical division of the diagram also reflects a division of responsibility - the left hand side is the responsibility of the programmer (the person writing the program), and the right hand side of the API specifier (for example, a standards defining body) and the API implementor (the system vendor) - now the right hand side is partially the responsibility of the programmer and the person performing the compilation. The programmer specifies the user-defined component of the API, and the person compiling the program either implements this API (as in the mips example above) or chooses between a number of alternative implementations provided by the programmer (as in the example below).

Let us consider a more complex example. Consider the following program which assumes, for simplicity, that an unsigned int contains 32 bits:

#include <stdio.h>
#include "config.h"

#ifndef SLOW_SHIFT
#define MSB( a ) ( ( unsigned char ) ( ( a ) >> 24 ) )
#else
#ifdef BIG_ENDIAN
#define MSB( a ) *( ( unsigned char * ) &( a ) )
#else
#define MSB( a ) *( ( unsigned char * ) &( a ) + 3 )
#endif
#endif

unsigned int x = 100000000 ;

int main ()
{
	printf ( "%u\n", MSB ( x ) ) ;
	return ( 0 ) ;
}

The intention is to print the most significant byte of x. Three alternative definitions of the macro MSB used to extract this value are provided. The first, if SLOW_SHIFT is not defined, is simply to shift the value right by 24 bits. This will work on all 32-bit machines, but may be inefficient (depending on the nature of the machine's shift instruction). So two alternatives are provided. An unsigned int is assumed to consist of four unsigned char's. On a big-endian machine, the most significant byte is the first of these unsigned char's; on a little-endian machine it is the fourth. The second definition of MSB is intended to reflect the former case, and the third the latter.

The person compiling the program has to choose between the three possible implementations of MSB provided by the programmer. This is done by either defining, or not defining, the macros SLOW_SHIFT and BIG_ENDIAN. This could be done by means of command line options, but here we have chosen to reflect another commonly used device, the configuration file. For each target machine, the programmer provides a version of the file config.h which defines the appropriate combination of the macros SLOW_SHIFT and BIG_ENDIAN. The person performing the compilation simply chooses the appropriate config.h for the target machine.
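For instance, the config.h supplied for one target might read as follows (the machine characteristics in the comment are hypothetical):

```c
/* config.h for a hypothetical big-endian target with a slow shift instruction */
#define SLOW_SHIFT
#define BIG_ENDIAN
```

A little-endian machine with a fast shifter would instead ship an empty config.h, leaving both macros undefined.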

There are two possible ways of looking at what the user-defined API of this program is. Possibly it is most natural to say that it is MSB, but it could also be argued that it is the macros SLOW_SHIFT and BIG_ENDIAN. The former more accurately describes the target dependent code, but is only implemented indirectly, via the latter.

1.3.2. Making APIs Explicit

As we have said, every program has an API even if it is implicit rather than explicit. Every system header included, every type or value used from it, and every library routine used, adds to the system-defined component of the API, and every conditional compilation adds to the user-defined component. What making the API explicit does is to encapsulate the set of requirements that the program has of the target machine (including requirements like "I need to know whether or not the target machine is big-endian", as well as "I need fputs to be implemented as in the ANSI standard"). By making these requirements explicit it is made absolutely clear what is needed on a target machine if a program is to be ported to it. If the requirements are not explicit they can only be found by trial and error. This is what we meant earlier by saying that a program without an explicit API is only portable by accident.

Another advantage of specifying the requirements of a program is that it may increase their chances of being implemented. We have spoken as if porting is a one-way process; program writers porting their programs to new machines. But there is also traffic the other way. Machine vendors may wish certain programs to be ported to their machines. If these programs come with a list of requirements then the vendor knows precisely what to implement in order to make such a port possible.

1.3.3. Choosing an API

So how does one go about choosing an API? In a sense the user-defined component is easier to specify than the system-defined component because it is less tied to particular implementation models. What is required is to abstract out what exactly needs to be done in a target dependent manner and to decide how best to separate it out. The most difficult problem is how to make the implementation of this API as simple as possible for the person performing the compilation, if necessary providing a number of alternative implementations to choose between and a simple method of making this choice (for example, the config.h file above). With the system-defined component the question is more likely to be, how do the various target machines I have in mind implement what I want to do? The abstraction of this is usually to choose a standard and widely implemented API, such as POSIX, which provides all the necessary functionality.

The choice of "standard" API is of course influenced by the type of target machines one has in mind. Within the Unix world, the increasing adoption of Open Standards, such as POSIX, means that choosing a standard API which is implemented on a wide variety of Unix boxes is becoming easier. Similarly, choosing an API which will work on most MSDOS machines should cause few problems. The difficulty is that these are disjoint worlds; it is very difficult to find a standard API which is implemented on both Unix and MSDOS machines. At present not much can be done about this; it reflects the disjoint nature of the computer market.

To develop a similar point : the drawback of choosing POSIX (for example) as an API is that it restricts the range of possible target machines to those which implement POSIX. Other machines, for example BSD-compliant machines, might offer the same functionality (albeit using different methods), so they should be potential target machines, but they have been excluded by the choice of API. One approach to the problem is the "alternative API" approach. Both the POSIX and the BSD variants are built into the program, but only one is selected on any given target machine by means of conditional compilation. Under our "equivalent functionality" definition of portability, this is a program which is portable to both POSIX and BSD compliant machines. But viewed in the light of the discussion above, if we regard a program as a program-API pair, it could be regarded as two separate programs combined on a single source code tree. A more interesting approach would be to abstract out exactly what functionality POSIX and BSD have in common and use that as the API. Then instead of two separate APIs we would have a single API with two broad classes of implementations. The advantage of this latter approach becomes clear if we wished to port the program to a machine which implements neither POSIX nor BSD, but provides the equivalent functionality in a third way.

As a simple example, both POSIX and BSD provide very similar methods for scanning the entries of a directory. The main difference is that the POSIX version is defined in dirent.h and uses a structure called struct dirent, whereas the BSD version is defined in sys/dir.h and calls the corresponding structure struct direct. The actual routines for manipulating directories are the same in both cases. So the only abstraction required to unify these two APIs is to introduce an abstract type, dir_entry say, which can be defined by:

typedef struct dirent dir_entry ;

on POSIX machines, and:

typedef struct direct dir_entry ;

on BSD machines. Note how this portion of the API crosses the system-user boundary. The object dir_entry is defined in terms of the objects in the system headers, but the precise definition depends on a user-defined value (whether the target machine implements POSIX or BSD).

1.3.4. Alternative Program Versions

Another reason for introducing conditional compilation which relates to APIs is the desire to combine several programs, or versions of programs, on a single source tree. There are several cases to be distinguished between. The reuse of code between genuinely different programs does not really enter the argument : any given program will only use one route through the source tree, so there is no real conditional compilation per se in the program. What is more interesting is the use of conditional compilation to combine several versions of the same program on the same source tree to provide additional or alternative features.

It could be argued that the macros (or whatever) used to select between the various versions of the program are just part of the user-defined API as before. But consider a simple program which reads in some numerical input, say, processes it, and prints the results. This might, for example, have POSIX as its API. We may wish to optionally enhance this by displaying the results graphically rather than textually on machines which have X Windows, the compilation being conditional on some boolean value, HAVE_X_WINDOWS, say. What is the API of the resultant program? The answer from the point of view of the program is the union of POSIX, X Windows and the user-defined value HAVE_X_WINDOWS. But from the implementation point of view we can either implement POSIX and set HAVE_X_WINDOWS to false, or implement both POSIX and X Windows and set HAVE_X_WINDOWS to true. So what introducing HAVE_X_WINDOWS does is to allow flexibility in the API implementation.

This is very similar to the alternative APIs discussed above. However the approach outlined will really only work for optional API extensions. To work in the alternative API case, we would need to have the union of POSIX, BSD and a boolean value, say, as the API. Although this is possible in theory, it is likely to lead to namespace clashes between POSIX and BSD.

2. Introduction to TDF

  1. 2.1. Features of TDF
    1. 2.1.1. Capsule Structure
    2. 2.1.2. Tokens
  2. 2.2. TDF Compilation Phases
    1. 2.2.1. API Description (Top Right)
    2. 2.2.2. Production (Top Left)
    3. 2.2.3. API Implementation (Bottom Right)
    4. 2.2.4. Installation (Bottom Left)
    5. 2.2.5. Illustrated Example
  3. 2.3. Aspects of the TDF System
    1. 2.3.1. The C to TDF Producer
    2. 2.3.2. C to TDF Mappings
    3. 2.3.3. TDF Linking
    4. 2.3.4. The TDF Installers
  4. 2.4. TDF and APIs
    1. 2.4.1. API Description
    2. 2.4.2. TDF Library Building
  5. 2.5. TDF and Conditional Compilation
    1. 2.5.1. User-Defined APIs
    2. 2.5.2. User-Defined Tokens - Example
    3. 2.5.3. Conditional Compilation within TDF
    4. 2.5.4. Alternative Program Versions

Having discussed many of the problems involved with writing portable programs, we now eventually turn to TDF. Firstly a brief technical overview is given, indicating those features of TDF which facilitate the separation of program from API. Secondly the TDF compilation scheme is described. It is shown how the features of TDF are exploited to aid in the separation of target independent and target dependent code which we have indicated as characterising portable programs. Finally, the various constituents of this scheme are considered individually, and their particular roles are described in more detail.

2.1. Features of TDF

It is not the purpose of this paper to explain the exact specification of TDF - this is described elsewhere (see [6] and [4]) - but rather to show how its general design features make it suitable as an aid to writing portable programs.

TDF is an abstraction of high-level languages - it contains such things as exps (abstractions of expressions and statements), shapes (abstractions of types) and tags (abstractions of variable identifiers). In general form it is an abstract syntax tree which is flattened and encoded as a series of bits, called a capsule. This fairly high level of definition (for a compiler intermediate language) means that TDF is architecture neutral in the sense that it makes no assumptions about the underlying processor architecture.

The translation of a capsule to and from the corresponding syntax tree is totally unambiguous; moreover, TDF has a "universal" semantic interpretation, as defined in the TDF specification.

2.1.1. Capsule Structure

A TDF capsule consists of a number of units of various types, embedded in a general linkage scheme (see Figure 2). Each unit contains a number of variable objects of various sorts (for example, tags and tokens) which are potentially visible to other units. Within the unit body each variable object is identified by a unique number. The linking is via a set of variable objects which are global to the entire capsule. These may in turn be associated with external names. For example, in Figure 2, the fourth variable of the first unit is identified with the first variable of the third unit, and both are associated with the fourth external name (indicated).

Figure 2. TDF Capsule Structure

This capsule structure means that the combination of a number of capsules to form a single capsule is a very natural operation. The actual units are copied unchanged into the resultant capsule - it is only the surrounding linking information that needs changing. Many criteria could be used to determine how this linking is to be organised, but the simplest is to link two objects if and only if they have the same external name. This is the scheme that the current TDF linker has implemented. Furthermore such operations as changing an external name or removing it altogether ("hiding") are very simple under this linking scheme.

2.1.2. Tokens

So, the combination of program at this high level is straightforward. But TDF also provides another mechanism which allows for the combination of program at the syntax tree level, namely tokens. Virtually any node of the TDF tree may be a token: a place holder which stands for a subtree. Before the TDF can be decoded fully the definition of this token must be provided. The token definition is then macro substituted for the token in the decoding process to form the complete tree (see Figure 3).

Figure 3. TDF Tokens

Tokens may also take arguments (see Figure 4). The actual argument values (from the main tree) are substituted for the formal parameters in the token definition.

Figure 4. TDF Tokens (with arguments)

As mentioned above, tokens are one of the types of variable objects which are potentially visible to external units. This means that a token does not have to be defined in the same unit as it is used in. Nor need the units involved originally have come from the same capsule, provided they have been linked before they need to be fully decoded. Tokens therefore provide a mechanism for the low-level separation and combination of code.

2.2. TDF Compilation Phases

We have seen how one of the great strengths of TDF is the fact that it facilitates the separation and combination of program. We now demonstrate how this is applied in the TDF compilation strategy. This section is designed only to give an outline of this scheme. The various constituent phases are discussed in more detail later.

Again we start with the simplest case, where the program contains no target dependent code. The strategy is illustrated in Figure 5, which should be compared with the traditional compilation strategy shown earlier. The general layout of the diagrams is the same. The left halves of the diagrams refer to the program itself, and the right halves to the corresponding API. The top halves refer to machine independent material, and the bottom halves to what happens on each target machine. Thus, as before, the portable program appears in the top left of the diagram, and the corresponding API in the top right.

The first thing to note is that, whereas previously all the compilation took place on the target machines, here the compilation has been split into a target independent (C to TDF) part, called production, and a target dependent (TDF to target) part, called installation. One of the synonyms for TDF is ANDF, Architecture Neutral Distribution Format, and we require that the production is precisely that - architecture neutral - so that precisely the same TDF is installed on all the target machines.

This architecture neutrality necessitates a separation of code. For example, in the "Hello world" example discussed in sections 2.1.1 and 2.1.2, the API specifies that there shall be a type FILE and an object stdout of type FILE *, but the implementations of these may be different on all the target machines. Thus we need to be able to abstract out the code for FILE and stdout from the TDF output by the producer, and provide the appropriate (target dependent) definitions for these objects in the installation phase.

[Diagram: on the target-independent side, the portable program source (.c/.cc) is compiled by the producer (tdfc/tcpplus) into target-independent TDF (.j), while the API description (tspec) yields target-independent API headers (.h); on each target machine, TDF linking (tld) combines the program with the API TDF libraries (.tl) built from the system headers (.h), and the installer (trans), assembler (as) and linker (ld) produce the final executable from the binary objects (.o) and system libraries (.o/.a/.so).]
Figure 5. TDF Compilation Phases

2.2.1. API Description (Top Right)

The method used for this separation is the token mechanism. Firstly the syntactic element of the API is described in the form of a set of target independent headers. Whereas the target dependent, system headers contain the actual implementation of the API on a particular machine, the target independent headers express to the producer what is actually in the API, and which may therefore be assumed to be common to all compliant target machines. For example, in the target independent headers for the ANSI C89 standard, there will be a file stdio.h containing the lines:

#pragma token TYPE FILE # c89.stdio.FILE
#pragma token EXP rvalue : FILE * : stdout # c89.stdio.stdout
#pragma token FUNC int ( const char *, FILE * ) : fputs # c89.stdio.fputs

These #pragma token directives are extensions to the C syntax which enable the expression of abstract syntax information to the producer. The directives above tell the producer that there exists a type called FILE, an expression stdout which is an rvalue (that is, a non-assignable value) of type FILE *, and a procedure fputs with prototype:

int fputs ( const char *, FILE * ) ;

and that it should leave their values unresolved by means of tokens (for more details on the #pragma token directive see [3]). Note how the information in the target independent header precisely reflects the syntactic information in the ANSI C89 API.

The names c89.stdio.FILE etc. give the external names for these tokens, those which will be visible at the outermost layer of the capsule; they are intended to be unique (this is discussed below). It is worth making the distinction between the internal names and these external token names. The former are the names used to represent the objects within C, and the latter the names used within TDF to represent the tokens corresponding to these objects.

See the Orientation Guide for details of how the target-independent API headers and the corresponding TDF libraries are generated.

2.2.2. Production (Top Left)

Now the producer can compile the program using these target independent headers. As will be seen from the "Hello world" example, these headers contain sufficient information to check that the program is syntactically correct. The produced, target independent, TDF will contain tokens corresponding to the various uses of stdout, fputs and so on, but these tokens will be left undefined. In fact there will be other undefined tokens in the TDF. The basic C types int and char are used in the program, and their implementations may vary between target machines. Thus these types must also be represented by tokens. However these tokens are implicit in the producer rather than explicit in the target independent headers.

Note also that because the information in the target independent headers describes abstractly the contents of the API and not some particular implementation of it, the producer is in effect checking the program against the API itself.

2.2.3. API Implementation (Bottom Right)

Before the TDF output by the producer can be decoded fully, the definitions of the tokens it has left undefined must be provided. These definitions will potentially differ between target machines, reflecting the implementation of the API on each machine.

The syntactic details of the implementation are to be found in the system headers. The process of defining the tokens describing the API (called TDF library building) consists of comparing the implementation of the API as given in the system headers with the abstract description of the tokens comprising the API given in the target independent headers. The token definitions thus produced are stored as TDF libraries, which are just archives of TDF capsules.

For example, in the example implementation of stdio.h given in section 2.1.2, the token c89.stdio.FILE will be defined as the TDF compound shape corresponding to the structure defining the type FILE (recall the distinction between internal and external names). __iob will be an undefined tag whose shape is an array of 60 copies of the shape given by the token c89.stdio.FILE, and the token c89.stdio.stdout will be defined to be the TDF expression corresponding to a pointer to the second element of this array. Finally the token c89.stdio.fputs is defined to be the effect of applying the procedure given by the undefined tag fputs. (In fact, this picture has been slightly simplified for the sake of clarity. See the section on C to TDF mappings in section 3.3.2.)

These token definitions are created using exactly the same C to TDF translation program as is used in the producer phase. This program knows nothing about the distinction between target independent and target dependent TDF, it merely translates the C it is given (whether from a program or a system header) into TDF. It is the compilation process itself which enables the separation of target independent and target dependent TDF.

In addition to the tokens made explicit in the API, the implicit tokens built into the producer must also have their definitions inserted into the TDF libraries. The method of definition of these tokens is slightly different. The definitions are automatically deduced by, for example, looking in the target machine's limits.h header to find the local values of CHAR_MIN and CHAR_MAX, and deducing the definition of the token corresponding to the C type char from this. It will be the variety (the TDF abstraction of integer types) consisting of all integers between these values. These tokens comprise a producer-specific “Language Programming Interface”, and are implemented as LPI TDF libraries (not illustrated) which are linked in alongside the API TDF libraries.

Note that what we are doing in the main library build is checking the actual implementation of the API against the abstract syntactic description. Any variations of the syntactic aspects of the implementation from the API will therefore show up. Thus library building is an effective way of checking the syntactic conformance of a system to an API. Checking the semantic conformance is far more difficult - we shall return to this issue later.

2.2.4. Installation (Bottom Left)

The installation phase is now straightforward. The target independent TDF representing the program contains various undefined tokens (corresponding to objects in the API), and the definitions for these tokens on the particular target machine (reflecting the API implementation) are to be found in the local TDF libraries. It is a natural matter to link these to form a complete, target dependent, TDF capsule. The rest of the installation consists of a straightforward translation phase (TDF to target) to produce a binary object file, and linking with the system libraries to form a final executable. Linking with the system libraries will resolve any tags left undefined in the TDF.

2.2.5. Illustrated Example

In order to help clarify exactly what is happening where, Figure 6 shows a simple example superimposed on the TDF compilation diagram. Selected relevant contents of TDF capsules are notated using tnc -p syntax. Specifics of the libc implementation are omitted for brevity and are indicated by an ellipsis.

[Diagram: the producer (tdfc) compiles a.c (#include <stdio.h>; FILE f;) against the target-independent headers, which declare the FILE token (#pragma token TYPE FILE, giving the declaration ( make_tokdec c89.stdio.FILE - shape )); the resulting capsule a.j contains a tag definition for f (( make_var_tagdef f - - ( make_value c89.stdio.FILE ) )) with the FILE token left undefined. Library building translates the system header's "typedef ... FILE;" into a token definition (( make_tokdef c89.stdio.FILE - shape ... )) stored in c89.tl. tld links a.j with c89.tl to give a.t, which trans, as and ld turn into a.out, linking with libc.so.]
Figure 6. Example Compilation

The program to be translated is simply:

#include <stdio.h>
FILE f ;

and the API is as above, so that FILE is an abstract type. This API is described as target independent headers containing the #pragma token statements given above. The producer combines the program with the target independent headers to produce a target independent capsule which defines a tag f whose shape is given by the token representing FILE, but leaves this token undefined. In the API implementation, the local definition of the type FILE from the system headers is translated into the definition of this token by the library building process. Finally in the installation, the target independent capsule is combined with the local token definition library to form a target dependent capsule in which all the tokens used are also defined. This is then installed further as described above.

2.3. Aspects of the TDF System

Let us now consider in more detail some of the components of the TDF system and how they fit into the compilation scheme.

2.3.1. The C to TDF Producer

Above it was emphasised how the design of the compilation strategy aids the representation of a program in a target independent manner, but this is not enough in itself. The C to TDF producer must represent everything symbolically; it cannot make assumptions about the target machine. For example, the line of C containing the initialisation:

int a = 1 + 1 ;

is translated into TDF representing precisely that, 1 + 1, not 2, because it does not know the representation of int on the target machine. The installer does know this, and so is able to replace 1 + 1 by 2 (provided this is actually true).
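In tnc -p style notation (as used in Figure 6) the producer's output for this expression might look roughly as follows. This is an illustrative sketch only: plus and make_int are genuine TDF constructors, but the variety token name var_int and the exact layout are invented here for the example:

```
( plus
  impossible
  ( make_int var_int 1 )
  ( make_int var_int 1 ) )
```

The installer, which does know the representation of int on the target, can fold this to the equivalent of ( make_int var_int 2 ).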

As another example, in the structure:

struct tag {
	int a ;
	double b ;
} ;

the producer does not know the actual value in bits of the offset of the second field from the start of the structure - it depends on the sizes of int and double and the alignment rules on the target machine. Instead it represents the offset symbolically (it is the size of int rounded up to a multiple of the alignment of double). This level of abstraction makes the tokenisation required by the target independent API headers very natural. If all we knew was that there existed a structure struct tag with a field b of type double, it would be perfectly simple to use a token to represent the (unknown) offset of this field from the start of the structure rather than the calculated (known) value. Similarly, when it comes to defining this token in the library building phase (recall that this is done by the same C to TDF translation program as the production), it is a simple matter to define the token to be the calculated value.

Furthermore, because all the producer's operations are performed at this very abstract level, it is a simple matter to put in extra portability checks. For example, it would be a relatively simple task to put most of the functionality of lint (excluding intermodular checking) or gcc's -Wall option into the producer, and moreover have these checks applied to an abstract machine rather than a particular target machine. Indeed a number of these checks have already been implemented.

These extra checks are switched on and off by using #pragma statements. (For more details on the #pragma syntax and which portability checks are currently supported by the producer see [3].) For example, ANSI C states that any undeclared function is assumed to return int, whereas for strict portability checking it is more useful to have undeclared functions marked as an error (indeed for strict API checking this is essential). This is done by inserting the line:

#pragma no implicit definitions

either at the start of each file to be checked or, more simply, in a start-up file - a file which can be #include'd at the start of each source file by means of a command line option.

Because these checks can be turned off as well as on it is possible to relax as well as strengthen portability checking. Thus if a program is only intended to work on 32-bit machines, it is possible to switch off certain portability checks. The whole ethos underlying the producer is that these portability assumptions should be made explicit, so that the appropriate level of checking can be done.

As has been previously mentioned, the use of a single front-end to any compiler not only virtually eliminates the problems of differing code interpretation and compiler quirks, but also reduces the exposure to compiler bugs. Of course, this also applies to the TDF compiler, which has a single front-end (the producer) and multiple back-ends (the installers). As regards the syntax and semantics of the C language, the producer is by default a strictly ANSI C compliant compiler. (Addition to the October 1993 revision: alas, this is no longer true; however strict ANSI can be specified by means of a simple command line option (see [1]). The decision whether to make the default strict and allow people to relax it, or to make the default lenient and allow people to strengthen it, is essentially a political one. It does not really matter in technical terms provided the user is made aware of exactly what each compilation mode means in terms of syntax, semantics and portability checking.) However it is possible to change its behaviour (again by means of #pragma statements) to implement many of the features found in "traditional" or "K&R" C. Hence it is possible to determine precisely how the producer will interpret the C code it is given by explicitly describing, in terms of these #pragma statements, the C dialect it is written in.

2.3.2. C to TDF Mappings

The nature of the C to TDF transformation implemented by the producer is worth considering, although not all the features described in this section are fully implemented in the current (October 1993) producer. Although it is only indirectly related to questions of portability, this mapping does illustrate some of the problems the producer has in trying to represent a program in an architecture neutral manner.

Once the syntactic and semantic differences between the various C dialects have been overcome, the C to TDF mapping is quite straightforward. In a hierarchy from high level to low level languages C and TDF are not that dissimilar - both come towards the bottom of what may legitimately be regarded as high level languages. Thus the constructs in C map easily onto the constructs of TDF (there are a few exceptions, for example coercing integers to pointers, which are discussed in [3]). Eccentricities of the C language specification such as doing all integer arithmetic in the promoted integer type are translated explicitly into TDF. So to add two char's, they are promoted to int's, added together as int's, and the result is converted back to a char. These rules are not built directly into TDF because of the desire to support languages other than C (and even other C dialects).

A number of issues arise when tokens are introduced. Consider for example the type size_t from the ANSI C89 standard. This is a target dependent integer type, so bearing in mind what was said above it is natural for the producer to use a tokenised variety (the TDF representation of integer types) to stand for size_t. This is done by a #pragma token statement of the form:

#pragma token VARIETY size_t # c89.stddef.size_t

But if we want to do arithmetic on size_t's we need to know the integer type corresponding to the integral promotion of size_t. This is again target dependent, so it makes sense to have another tokenised variety representing the integral promotion of size_t. Thus the simple token directive above maps to (at least) two TDF tokens, the type itself and its integral promotion.
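So, in effect, the producer works with a pair of directives along the following lines. The first is the directive given above; the second is hypothetical, with the name __promote_size_t and its external token name invented here purely for illustration:

```c
#pragma token VARIETY size_t # c89.stddef.size_t
#pragma token VARIETY __promote_size_t # c89.stddef.__promote_size_t
```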

As another example, suppose that we have a target dependent C type, type say, and we define a procedure which takes an argument of type type. In both the procedure body and at any call of the procedure the TDF we need to produce to describe how C passes this argument will depend on type. This is because C does not treat all procedure argument types uniformly. Most types are passed by value, but array types are passed by address. But whether or not type is an array type is target dependent, so we need to use tokens to abstract out the argument passing mechanism. For example, we could implement the mechanism using four tokens : one for the type type (which will be a tokenised shape), one for the type an argument of type type is passed as, arg_type say, (which will be another tokenised shape), and two for converting values of type type to and from the corresponding values of type arg_type (these will be tokens which take one exp argument and give an exp). For most types, arg_type will be the same as type and the conversion tokens will be identities, but for array types, arg_type will be a pointer to type and the conversion tokens will be "address of" and "contents of".
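A sketch of these four tokens in #pragma token notation might look as follows. All the names and external token names here are invented for illustration; the real mechanism is internal to the producer and need not be user-visible in this form:

```c
#pragma token TYPE type # api.type
#pragma token TYPE arg_type # api.type.arg_type
#pragma token PROC ( EXP rvalue : type : e )\
	EXP rvalue : arg_type : to_arg # api.type.to_arg
#pragma token PROC ( EXP rvalue : arg_type : e )\
	EXP rvalue : type : from_arg # api.type.from_arg
```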

So there is not the simple one to one correspondence between #pragma token directives and TDF tokens one might expect. Each such directive maps onto a family of TDF tokens, and this mapping in a sense encapsulates the C language specification. Of course in the TDF library building process the definitions of all these tokens are deduced automatically from the local values.

2.3.3. TDF Linking

We now move from considering the components of the producer to those of the installer. The first phase of the installation - linking in the TDF libraries containing the token definitions describing the local implementation of the API - is performed by a general utility program, the TDF linker (or builder). This is a very simple program which is used to combine a number of TDF capsules and libraries into a single capsule. As has been emphasised previously, the capsule structure means that this is a very natural operation, but, as has been seen in the previous discussion (particularly section 2.2.3), such combinatorial phases are very prone to namespace problems.

In TDF, tags, tokens and other externally named objects occupy separate namespaces, and there are no constructs which can cut across these namespaces in the way that C macros do. There still remains the problem that the only way to know that two tokens, say, in different capsules are actually the same is if they have the same name. This, as we have already seen in the case of system linking, can cause objects to be identified wrongly.

In the main TDF linking phase - linking in the token definitions at the start of the installation - we are primarily linking on token names, these tokens being those arising from the use of the target independent headers. Potential namespace problems are virtually eliminated by the use of unique external names for the tokens in these headers (such as c89.stdio.FILE in the example above). This means that there is a genuine one to one correspondence between tokens and token names. Of course this relies on the external token names given in the headers being genuinely unique. In fact, as is explained below, these names are normally automatically generated, and uniqueness of names within a given API is checked. Also incorporating the API name into the token name helps to ensure uniqueness across APIs. However the token namespace does require careful management. (Note that the user does not normally have access to the token namespace; all variable and procedure names map into the tag namespace.)

We can illustrate the "clean" nature of TDF linking by considering the st_atime example given in section 2.2.3. Recall that in the traditional compilation scheme the problem arose, not because of the program or the API implementation, but because of the way they were combined by the pre-processor. In the TDF scheme the target independent version of sys/stat.h will be included. Thus the procedure name st_atime and the field selector st_atime will be seen to belong to genuinely different namespaces - there are no macros to disrupt this. The former will be translated into a TDF tag with external name st_atime, whereas the latter is translated into a token with external name posix.stat.struct_stat.st_atime, say. In the TDF library reflecting the API implementation, the token posix.stat.struct_stat.st_atime will be defined precisely as the system header intended, as the offset corresponding to the C field selector st_atim.st__sec. The fact that this token is defined using a macro rather than a conventional direct field selector is not important to the library building process. Now the combination of the program with the API implementation in this case is straightforward - not only are the procedure name and the field selector name in the TDF now different, but they also lie in distinct namespaces. This shows how the separation of the API implementation from the main program is cleaner in the TDF compilation scheme than in the traditional scheme.

TDF linking also opens up new ways of combining code which may solve some other namespace problems. For example, in the open example in section 2.2.3, the name open is meant to be internal to the program. It is the fact that it is not treated as such which leads to the problem. If the program consisted of a single source file then we could make open a static procedure, so that its name does not appear in the external namespace. But if the program consists of several source files the external name is necessary for intra-program linking. The TDF linker allows this intra-program linking to be separated from the main system linking. In the TDF compilation scheme described above each source file is translated into a separate TDF capsule, which is installed separately to a binary object file. It is only the system linking which finally combines the various components into a single program. An alternative scheme would be to use the TDF linker to combine all the TDF capsules into a single capsule in the production phase and install that. Because all the intra-program linking has already taken place, the external names required for it can be "hidden" - that is to say, removed from the tag namespace. Only tag names which are used but not defined (and so are not internal to the program) and main should not be hidden. In effect this linking phase has made all the internal names in the program (except main) static.

In fact this type of complete program linking is not always feasible. For very large programs the resulting TDF capsule can be too large for the installer to cope with (it is the system assembler which tends to cause the most problems). Instead it may be better to use a more judiciously chosen partial linking and hiding scheme.

2.3.4. The TDF Installers

The TDF installer on a given machine typically consists of four phases: TDF linking, which has already been discussed, translating TDF to assembly source code, translating assembly source code to a binary object file, and linking binary object files with the system libraries to form the final executable. The latter two phases are currently implemented by the system assembler and linker, and so are identical to the traditional compilation scheme.

It is the TDF to assembly code translator which is the main part of the installer. Although not strictly related to the question of portability, the nature of the translator is worth considering. Like the producer (and the assembler), it is a transformational, as opposed to a combinatorial, compilation phase. But whereas the transformation from C to TDF is "difficult" because of the syntax and semantics of C and the need to represent everything in an architecture neutral manner, the transformation from TDF to assembly code is much easier because of the unambiguous syntax and uniform semantics of TDF, and because now we know the details of the target machine, it is no longer necessary to work at such an abstract level.

The whole construction of the current generation of TDF translators is based on the concept of compilation as transformation. They represent the TDF they read in as a syntax tree, virtually identical to the syntax tree comprising the TDF. The translation process then consists of continually applying transformations to this tree - in effect TDF to TDF transformations - gradually optimising it and changing it to a form where the translation into assembly source code is a simple transcription process (see [7]).

Even such operations as constant evaluation - replacing 1 + 1 by 2 in the example above - may be regarded as TDF to TDF transformations. But so may more complex optimisations such as taking constants out of a loop, common sub-expression elimination, strength reduction and so on. Some of these transformations are universally applicable, others can only be applied on certain classes of machines. This transformational approach results in high quality code generation (see [5]) while minimising the risk of transformational errors. Moreover the sharing of so much code - up to 70% - between all the TDF translators, like the introduction of a common front-end, further reduces the exposure to compiler bugs.

Much of the machine ABI information is built into the translator in a very simple way. For example, to evaluate the offset of the field b in the structure struct tag above, the producer has already done all the hard work, providing a formula for the offset in terms of the sizes and alignments of the basic C types. The translator merely provides these values and the offset is automatically evaluated by the constant evaluation transformations. Other aspects of the ABI, for example the procedure argument and result passing conventions, require more detailed attention.

One interesting range of optimisations implemented by many of the current translators consists of the inlining of certain standard procedure calls. For example, strlen ( "hello" ) is replaced by 5. As it stands this optimisation appears to run the risk of corrupting the programmer's namespace - what if strlen was a user-defined procedure rather than the standard library routine (cf. the open example in section 2.2.3)? This risk only materialises however if we actually use the procedure name to spot this optimisation. In code compiled from the target independent headers all calls to the library routine strlen will be implemented by means of a uniquely named token, c89.string.strlen say. It is by recognising this token name as the token is expanded that the translators are able to ensure that this is really the library routine strlen.

Another example of an inlined procedure of this type is alloca. Many other compilers inline alloca, or rather they inline __builtin_alloca and rely on the programmer to identify alloca with __builtin_alloca. This gets round the potential namespace problems by getting the programmer to confirm that alloca in the program really is the library routine alloca. By the use of tokens this information is automatically provided to the TDF translators.

2.4. TDF and APIs

What the discussion above has emphasised is that the ability to describe APIs abstractly as target independent headers underpins the entire TDF approach to portability. We now consider this in more detail.

2.4.1. API Description

The process of transforming an API specification into its description in terms of #pragma token directives is a time-consuming but often fascinating task. In this section we discuss some of the issues arising from the process of describing an API in this way.

The Description Process

As may be observed from the example given in section 3.2.1, the #pragma token syntax is not necessarily intuitively obvious. It is designed to be a low-level description of tokens which is capable of expressing many complex token specifications. Most APIs are however specified in C-like terms, so an alternative syntax, closer to C, has been developed in order to facilitate their description. This is then transformed into the corresponding #pragma token directives by a specification tool called tspec (see [2]), which also applies a number of checks to the input and generates the unique token names. For example, the description leading to the example above was:

+EXP FILE *stdout ;
+FUNC int fputs ( const char *, FILE * ) ;

Note how close this is to the English language specification of the API given previously. (There are a number of open issues relating to tspec and the #pragma token syntax, mainly concerned with determining the type of syntactic statements that it is desired to make about the APIs being described. The current scheme is adequate for those APIs so far considered, but it may need to be extended in future.)

tspec is not capable of expressing the full power of the #pragma token syntax. Whereas this makes it easier to use in most cases, for describing the normal C-like objects such as types, expressions and procedures, it cannot express complex token descriptions. Instead it is necessary to express these directly in the #pragma token syntax. However this is only rarely required: the constructs offsetof, va_start and va_arg from ANSI C89 are the only examples so far encountered during the API description programme at DRA. For example, va_arg takes an assignable expression of type va_list and a type t and returns an expression of type t. Clearly this cannot be expressed abstractly in C-like terms, so the #pragma token description:

#pragma token PROC ( EXP lvalue : va_list : e, TYPE t )\
	EXP rvalue : t : va_arg # c89.stdarg.va_arg

must be used instead.

Most of the process of describing an API consists of going through its English language specification transcribing the object specifications it gives into the tspec syntax (if the specification is given in a machine readable form this process can be partially automated). The interesting part consists of trying to interpret what is written and reading between the lines as to what is meant. It is important to try to represent exactly what is in the specification rather than being influenced by one's knowledge of a particular implementation, otherwise the API checking phase of the compilation will not be checking against what is actually in the API but against a particular way of implementing it.

There is a continuing API description programme at DRA. The current status (October 1993) is that ANSI C89 (X3.159), POSIX (1003.1), XPG3 (X/Open Portability Guide 3) and SVID (System V Interface Definition, 3rd Edition) have been described and extensively tested. POSIX2 (1003.2), XPG4, AES (Revision A), X11 (Release 5) and Motif (Version 1.1) have been described, but not yet extensively tested.

There may be some syntactic information in the paper API specifications which tspec (and the #pragma token syntax) is not yet capable of expressing. In particular, some APIs go into very careful management of namespaces within the API, explicitly spelling out exactly what should, and should not, appear in the namespaces as each header is included (see the appendix on namespaces and APIs below). What is actually being done here is to regard each header as an independent sub-API. There is not however a sufficiently developed "API calculus" to allow such relationships to be easily expressed.

Resolving Conflicts

Another consideration during the description process is to try to integrate the various API descriptions. For example, POSIX extends ANSI C89, so it makes sense to have the target independent POSIX headers include the corresponding ANSI C89 headers and just add the new objects introduced by POSIX. This does present problems with APIs which are basically compatible but have a small number of incompatibilities, whether deliberate or accidental. As an example of an "accidental" incompatibility, XPG3 is an extension of POSIX, but whereas POSIX declares malloc by means of the prototype:

void *malloc(size_t);

XPG3 declares it by means of the traditional procedure declaration:

void *malloc(s)
size_t s;

These are surely intended to express the same thing, but in the first case the argument is passed as a size_t and in the second it is firstly promoted to the integer promotion of size_t. On most machines these are compatible, either because of the particular implementation of size_t, or because the procedure calling conventions make them compatible. However in general they are incompatible, so the target independent headers either have to reflect this or have to read between the lines and assume that the incompatibility was accidental and ignore it.

As an example of a deliberate incompatibility, both XPG3 and SVID3 declare a structure struct msqid_ds in sys/msg.h which has fields msg_qnum and msg_qbytes. The difference is that whereas XPG3 declares these fields to have type unsigned short, SVID3 declares them to have type unsigned long. However for most purposes the precise types of these fields are not important, so the APIs can be unified by making the types of these fields target dependent. That is to say, tokenised integer types __msg_q_t and __msg_l_t are introduced. On XPG3-compliant machines these will both be defined to be unsigned short, and on SVID3-compliant machines they will both be unsigned long. So, although strict XPG3 and strict SVID3 are incompatible, the two extension APIs created by adding these types are compatible. In the rare case when the precise type of these fields is important, the strict APIs can be recovered by defining the field types to be unsigned short or unsigned long at produce-time rather than at install-time. (XPG4 uses a similar technique to resolve this incompatibility. But whereas the XPG4 types need to be defined explicitly, the tokenised types are defined implicitly according to whatever the field types are on a particular machine.)
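A sketch of how the unified declaration might look in the target independent sys/msg.h follows; the external token names are invented here for illustration, and the other fields of the structure are elided:

```c
#pragma token VARIETY __msg_q_t # xpg3.msg.__msg_q_t
#pragma token VARIETY __msg_l_t # xpg3.msg.__msg_l_t

struct msqid_ds {
    /* ... other fields as specified by the API ... */
    __msg_q_t msg_qnum;
    __msg_l_t msg_qbytes;
};
```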

This example shows how introducing extra abstractions can resolve potential conflicts between APIs. But it may also be used to resolve conflicts between the API specification and the API implementations. For example, POSIX specifies that the structure struct flock defined in fcntl.h shall have a field l_pid of type pid_t. However on at least two of the POSIX implementations examined at DRA, pid_t was implemented as an int, but the l_pid field of struct flock was implemented as a short (this showed up in the TDF library building process). The immediate reaction might be that these systems have not implemented POSIX correctly, so they should be cast into the outer darkness. However for the vast majority of applications, even those which use the l_pid field, its precise type is not important. So the decision was taken to introduce a tokenised integer type, __flock_pid_t, to stand for the type of the l_pid field. So although the implementations do not conform to strict POSIX, they do to this slightly more relaxed extension. Of course, one could enforce strict POSIX by defining __flock_pid_t to be pid_t at produce-time, but the given implementations would not conform to this stricter API.

The previous two examples are both really concerned with the question of determining the correct level of abstraction in API specification. Abstraction is inclusive and allows for API evolution, whereas specialisation is exclusive and may lead to dead-end APIs. The SVID3 method of allowing for longer messages than XPG3 - changing the msg_qnum and msg_qbytes fields of struct msqid_ds from unsigned short to unsigned long - is an over-specialisation which leads to an unnecessary conflict with XPG3. The XPG4 method of achieving exactly the same end - abstracting the types of these fields - is, by contrast, a smooth evolutionary path.

The Benefits of API Description

The description process is potentially of great benefit to bodies involved in API specification. While the specification itself stays on paper, the only real existence of the API is through its implementations. Giving the specification a concrete form means that not only does it start to be seen as an object in its own right, rather than some fuzzy document underlying the real implementations, but also any omissions, insufficient specifications (where what is written down does not reflect what the writer actually meant) or built-in assumptions become more apparent. It may also help to show up the kind of over-specialisation discussed above. The concrete representation also becomes an object which both applications and implementations can be automatically checked against. As has been mentioned previously, the production phase of the compilation involves checking the program against the abstract API description, and the library building phase checks the syntactic aspect of the implementation against it.

The implementation checking aspect is considered below. Let us here consider the program checking aspect by re-examining the examples given earlier. The SIGKILL example is straightforward; SIGKILL will appear in the POSIX version of signal.h but not the ANSI version, so if the program is compiled with the target independent ANSI C89 headers it will be reported as being undefined. In a sense this is nothing to do with the #pragma token syntax, but with the organisation of the target independent headers. The other examples do however rely on the fact that the #pragma token syntax can express syntactic information in a way which is not possible directly from C. Thus the target independent headers express exactly the fact that time_t is an arithmetic type, about which nothing else is known. Hence ( t & 1 ) is not type correct for a time_t t, because the binary & operator does not apply to all arithmetic types. Similarly, for the type div_t the target independent headers express the information that there exists a structure type div_t with field selectors quot and rem of type int, but say nothing about the order of these fields or the existence of other fields. Thus any attempt to initialise a div_t will fail, because the correspondence between the values in the initialisation and the fields of the structure is unknown. The struct dirent example is entirely analogous, except that here the declarations of the structure type struct dirent and the field selector d_name appear in both the POSIX and XPG3 versions of dirent.h, whereas the field selector d_ino appears only in the XPG3 version.
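Concretely, the producer would reject fragments such as the following when compiled against the target independent headers, even though they typically compile against concrete system headers (a sketch; the diagnostics are paraphrased):

```c
#include <stdlib.h>
#include <time.h>

time_t t;
int odd = t & 1;       /* rejected: & is not valid on an unknown
                          arithmetic type                          */
div_t d = { 0, 0 };    /* rejected: the field order of div_t is
                          unknown, so the initialisers cannot be
                          matched to fields                        */
```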

2.4.2. TDF Library Building

As we have said, two of the primary problems with writing portable programs are dealing with API implementation errors on the target machines - objects not being defined, or being defined in the wrong place, or being implemented incorrectly - and namespace problems - particularly those introduced by the system headers. The most interesting contrast between the traditional compilation scheme and the TDF scheme is that in the former the program comes directly into contact with the "real world" of messy system headers and incorrectly implemented APIs, whereas in the latter there is an "ideal world" layer interposed. This consists of the target independent headers, which describe all the syntactic features of the API where they are meant to be, and with no extraneous material to clutter up the namespaces (like index and the macro st_atime in the examples given in section 2.2.3), and the TDF libraries, which can be combined "cleanly" with the program without any namespace problems. All the unpleasantness has been shifted to the interface between this "ideal world" and the "real world"; that is to say, the TDF library building.

The importance of this change may be summarised by observing that previously all the unpleasantnesses happened in the left hand side of the diagram (the program half), whereas in the TDF scheme they are in the right hand side (the API half). So API implementation problems are seen to be a genuinely separate issue from the main business of writing programs; the ball is firmly in the API implementor's court rather than the programmer's. Also the problems need to be solved once per API rather than once per program.

It might be said that this has not advanced us very far towards actually dealing with the implementation errors. The API implementation still contains errors whoever's responsibility it is. But the TDF library building process gives the API implementor a second chance. Many of the syntactic implementation problems will be shown up as the library builder compares the implementation against the abstract API description, and it may be possible to build corrections into the TDF libraries so that the libraries reflect, not the actual implementation, but some improved version of it.

To show how this might be done, we reconsider the examples of API implementation errors given earlier. As before, we may divide our discussion between system header problems and system library problems. Recall, however, the important distinction: whereas previously the programmer was trying to deal with these problems in a way which would work on all machines (top left of the compilation diagrams), now the person building the TDF libraries is trying to deal with implementation problems for a particular API on a particular machine (bottom right).

System Header Problems

Values which are defined in the wrong place, such as SEEK_SET in the example given, present no difficulties. The library builder will look where it expects to find them and report that they are undefined. To define these values it is merely a matter of telling the library builder where they are actually defined (in unistd.h rather than stdio.h).

Similarly, values which are undefined are also reported. If these values can be deduced from other information, then it is a simple matter to tell the library builder to use these deduced values. For example, if EXIT_SUCCESS and EXIT_FAILURE are undefined, it is probably possible to deduce their values from experimentation or experience (or guesswork).

Wrongly defined values are more difficult. Firstly they are not necessarily detected by the library builder because they are semantic rather than syntactic errors. Secondly, whereas it is easy to tell the library builder to use a corrected value rather than the value given in the implementation, this mechanism needs to be used with circumspection. The system libraries are provided pre-compiled, and they have been compiled using the system headers. If we define these values differently in the TDF libraries we are effectively changing the system headers, and there is a risk of destroying the interface with the system libraries. For example, changing a structure is not a good idea, because different parts of the program - the main body and the parts linked in from the system libraries - will have different ideas of the size and layout of this structure. (See the struct flock example for a potential method of resolving such implementation problems.)

In the two cases given above - DBL_MAX and size_t - the necessary changes are probably "safe". DBL_MAX is not a special value in any library routines, and changing size_t from int to unsigned int does not affect its size, alignment or procedure passing rules (at least not on the target machines we have in mind) and so should not disrupt the interface with the system library.

System Library Problems

Errors in the system libraries will not be detected by the TDF library builder because they are semantic errors, whereas the library building process is only checking syntax. The only realistic ways of detecting semantic problems are by means of test suites, such as the Plum-Hall or CVSA library tests for ANSI C89 and VSX for XPG3, or by detailed knowledge of particular API implementations born of personal experience. However it may be possible to build workarounds for problems identified in these tests into the TDF libraries.

For example, the problem with realloc discussed earlier could be worked around by defining the token representing realloc to be the equivalent of:

#define realloc( p, s ) ( void *q = ( p ), q ? ( realloc ) ( q, s ) : malloc ( s ) )

(where the C syntax has been extended to allow variables to be introduced inside expressions) or:

static void *__realloc ( void *p, size_t s )
{
	if ( p == NULL )
		return ( malloc ( s ) ) ;
	return ( ( realloc ) ( p, s ) ) ;
}

#define realloc( p, s ) __realloc ( p, s )

Alternatively, the token definition could be encoded directly into TDF (not via C), using the TDF notation compiler (see [9]).

TDF Library Builders

The discussion above shows how the TDF libraries are an extra layer which lies on top of the existing system API implementation, and how this extra layer can be exploited to provide corrections and workarounds to various implementation problems. The expertise of particular API implementation problems on particular machines can be captured once and for all in the TDF libraries, rather than being spread piecemeal over all the programs which use that API implementation. But being able to encapsulate this expertise in this way makes it a marketable quantity. One could envisage a market in TDF libraries: ranging from libraries closely reflecting the actual API implementation to top of the range libraries with many corrections and workarounds built in.

All of this has tended to paint the system vendors as the villains of the piece for not providing correct API implementations, but this is not entirely fair. The reason why API implementation errors may persist over many operating system releases is that system vendors have as many porting problems as anyone else - preparing a new operating system release is in effect a huge porting exercise - and are understandably reluctant to change anything which basically works. The use of TDF libraries could be a low-risk strategy for system vendors to allow users the benefits of API conformance without changing the underlying operating system.

Of course, if the system vendor's porting problems could be reduced, they would have more confidence to make their underlying systems more API conformant, and thereby help reduce the normal programmer's porting problems. So whereas using the TDF libraries might be a short-term workaround for API implementation problems, the rest of the TDF porting system might help towards a long-term solution.

Another interesting possibility arises. As we said above, many APIs, for example POSIX and BSD, offer equivalent functionality by different methods. It may be possible to use the TDF library building process to express one in terms of the other. For example, in the struct dirent example in section 2.3.3, the only differences between POSIX and BSD were that the BSD version was defined in a different header and that the structure was called struct direct. But this presents no problems to the TDF library builder: it is perfectly simple to tell it to look in sys/dir.h instead of dirent.h, and to identify struct direct with struct dirent. So it may be possible to build a partial POSIX lookalike on BSD systems by using the TDF library mechanism.

2.5. TDF and Conditional Compilation

So far our discussion of the TDF approach to portability has been confined to the simplest case, where the program itself contains no target dependent code. We now turn to programs which contain conditional compilation. As we have seen, many of the reasons why it is necessary to introduce conditional compilation into the traditional compilation process either do not arise or are seen to be distinct phases in the TDF compilation process. The use of a single front-end (the producer) virtually eliminates problems of compiler limitations and differing interpretations and reduces compiler bug problems, so it is not necessary to introduce conditionally compiled workarounds for these. Also API implementation problems, another prime reason for introducing conditional compilation in the traditional scheme, are seen to be isolated in the TDF library building process, thereby allowing the programmer to work in an idealised world one step removed from the real API implementations. However the most important reason for introducing conditional compilation is where things, for reasons of efficiency or whatever, are genuinely different on different machines. It is this we now consider.

2.5.1. User-Defined APIs

The things which are done genuinely differently on different machines have previously been characterised as comprising the user-defined component of the API. So the real issue in this case is how to use the TDF API description and representation methods within one's own programs. A very simple worked example is given below (in section 2.5.2); for more detailed examples see [8].

For the MSB example given in section 2.3 we firstly have to decide what the user-defined API is. To fully reflect exactly what the target dependent code is, we could define the API, in tspec terms, to be:

+MACRO unsigned char MSB ( unsigned int a ) ;

where the macro MSB gives the most significant byte of its argument, a. Let us say that the corresponding #pragma token statement is put into the header msb.h. Then the program can be recast into the form:

#include <stdio.h>
#include "msb.h"

unsigned int x = 100000000 ;

int main ()
{
	printf ( "%u\n", MSB ( x ) ) ;
	return ( 0 ) ;
}

The producer will compile this into a target independent TDF capsule which uses a token to represent the use of MSB, but leaves this token undefined. The only question that remains is how this token is defined on the target machine; that is, how the user-defined API is implemented. On each target machine a TDF library containing the local definition of the token representing MSB needs to be built. There are two basic possibilities. Firstly the person performing the installation could build the library directly, by compiling a program of the form:

#pragma implement interface "msb.h"
#include "config.h"

#ifndef SLOW_SHIFT
#define MSB( a ) ( ( unsigned char ) ( a >> 24 ) )
#else
#ifdef BIG_ENDIAN
#define MSB( a ) *( ( unsigned char * ) &( a ) )
#else
#define MSB( a ) *( ( unsigned char * ) &( a ) + 3 )
#endif
#endif

with the appropriate config.h to choose the correct local implementation of the interface described in msb.h. Alternatively the programmer could provide three alternative TDF libraries corresponding to the three implementations, and let the person installing the program choose between these. The two approaches are essentially equivalent, they just provide for making the choice of the implementation of the user-defined component of the API in different ways. An interesting alternative approach would be to provide a short program which does the selection between the provided API implementations automatically. This approach might be particularly effective in deciding which implementation offers the best performance on a particular target machine.

2.5.2. User-Defined Tokens - Example

As an example of how to define a simple token, consider a simple program which prints hello in some language, the language being target dependent. Our first task is to choose an API. We choose ANSI C89 extended by a tokenised object hello of type char * which gives the message to be printed. This object will be an rvalue (i.e. it cannot be assigned to). For convenience this token is declared in a header file, tokens.h say. This particular case is simple enough to encode by hand; it takes the form:

#pragma token EXP rvalue : char * : hello #
#pragma interface hello

consisting of a #pragma token directive describing the object to be tokenised, and a #pragma interface directive to show that this is the only object in the API. An alternative would be to generate tokens.h from a tspec specification of the form:

+EXP char *hello ;

The next task is to write the program conforming to this API. This may take the form of a single source file, hello.c, containing the lines:

#include <stdio.h>
#include "tokens.h"

int main ()
{
	printf ( "%s\n", hello ) ;
	return ( 0 ) ;
}

The production process may be specified by means of a Makefile. This uses the TDF C compiler, tcc, which is an interface to the TDF system which is designed to be like cc, but with extra options to handle the extra functionality offered by the TDF system (see [1]).

produce : hello.j

hello.j : hello.c tokens.h
	tcc -Fj hello.c

The production is run by typing make produce. The ANSI C89 API is the default, and so does not need to be specified to tcc. The program hello.c is compiled to a target independent capsule, hello.j. This will use a token to represent hello, but it will be left undefined.

On each target machine we need to create a token library giving the local definitions of the objects in the API. We shall assume that the library corresponding to the ANSI C89 API has already been constructed, so that we only need to define the token representing hello. This is done by means of a short C program, tokens.c, which implements the tokens declared in tokens.h. This might take the form:

#pragma implement interface "tokens.h"
#define hello "bonjour"

to define hello to be "bonjour". On a different machine, the definition of hello could be given as "hello", "guten Tag", "zdrastvuyte" (excuse my transliteration) or whatever (including complex expressions as well as simple strings). Note the use of #pragma implement interface to indicate that we are now implementing the API described in tokens.h, as opposed to the use of #include earlier when we were just using the API.

The installation process may be specified by adding the following lines to the Makefile:

install : hello

hello : hello.j tokens.tl
	tcc -o hello -J. -jtokens hello.j

tokens.tl : tokens.j
	tcc -Ymakelib -o tokens.tl tokens.j

tokens.j : tokens.c tokens.h
	tcc -Fj -not_ansi tokens.c

The complete installation process is run by typing make install. Firstly the file tokens.c is compiled to give the TDF capsule tokens.j containing the definition of hello. The -not_ansi flag is needed because tokens.c does not contain any real C (no declarations or definitions), which is not allowed in ANSI C. The next step is to turn the capsule tokens.j into a TDF library, tokens.tl, using the -Ymakelib option to tcc (with older versions of tcc it may be necessary to change this option to -Ymakelib -M -Fj). This completes the API implementation.

The final step is installation. The target independent TDF, hello.j, is linked with the TDF libraries tokens.tl and c89.tl (which is built into tcc as default) to form a target dependent TDF capsule with all the necessary token definitions, which is then translated to a binary object file and linked with the system libraries. All of this is under the control of tcc.

Note the four stages of the compilation: API specification, production, API implementation and installation, corresponding to the four regions of the compilation diagram.

2.5.3. Conditional Compilation within TDF

Although tokens are the main method used to deal with target dependencies, TDF does have built-in conditional compilation constructs. For most TDF SORTs X (for example, exp, shape or variety) there is a construct X_cond which takes an exp and two X's and gives an X. The exp argument will evaluate to an integer constant at install time. If this is true (nonzero), the result of the construct is the first X argument and the second is ignored; otherwise the result is the second X argument and the first is ignored. By ignored we mean completely ignored - the argument is stepped over and not decoded. In particular any tokens in the definition of this argument are not expanded, so it does not matter if they are undefined.

These conditional compilation constructs are used by the C to TDF producer to translate certain statements containing:

#if condition

where condition is a target dependent value. Thus, because it is not known which branch will be taken at produce time, the decision is postponed to install time. If condition is a target independent value then the branch to be taken is known at produce time, so the producer only translates this branch. Thus, for example, code surrounded by #if 0 ... #endif will be ignored by the producer.

Not all such #if statements can be translated into TDF X_cond constructs. The two branches of the #if statement are translated into the two X arguments of the X_cond construct; that is, into sub-trees of the TDF syntax tree. This can only be done if each of the two branches is syntactically complete.

The producer interprets #ifdef (and #ifndef) constructs to mean, is this macro defined (or undefined) at produce time? Given the nature of pre-processing in C this is in fact the only sensible interpretation. But if such constructs are being used to control conditional compilation, what is actually intended is, is this macro defined at install time? This distinction is necessitated by the splitting of the TDF compilation into production and installation - it does not exist in the traditional compilation scheme. For example, in the mips example in section 2.3, whether or not mips is defined is intended to be an installer property, rather than what it is interpreted as, a producer property. The choice of the conditional compilation path may be put off to install time by, for example, changing #ifdef mips to #if is_mips where is_mips is a tokenised integer which is either 1 (on those machines on which mips would be defined) or 0 (otherwise). In fact in view of what was said above about syntactic completeness, it might be better to recast the program as:

#include <stdio.h>
#include "user_api.h" /* For the spec of is_mips */

int main ()
	if ( is_mips ) {
		fputs ( "This machine is a mips\n", stdout ) ;
	return ( 0 ) ;

because the branches of an if statement, unlike those of an #if statement, have to be syntactically complete in any case. The installer will optimise out the unnecessary test and any unreached code, so the use of if ( condition ) is guaranteed to produce code as efficient as #if condition.

In order to help with such "installer macro" problems the producer has a mode for detecting them. All #ifdef and #ifndef constructs in which the compilation path to be taken is potentially target dependent are reported (see [3] and [8]).

The existence of conditional compilation within TDF also gives flexibility in how to approach expressing target dependent code. Instead of a "full" abstraction of the user-defined API as target dependent types, values and functions, it can be abstracted as a set of binary tokens (like is_mips in the example above) which are used to control conditional compilation. This latter approach can be used to quickly adapt existing programs to a TDF-portable form since it is closer to the "traditional" approach of scattering the program with #ifdef's and #ifndef's to implement target dependent code. However the definition of a user-defined API gives a better separation of target independent and target dependent code, and the effort to define such an API may often be justified. When writing a new program from scratch the API rather than the conditional compilation approach is recommended.

The former approach of a fully abstracted user-defined API may be more time consuming in the short run, but this may well be offset by the increased ease of porting. Also there is no reason why a user-defined API, once specified, should not serve more than one program. Similar programs are likely to require the same abstractions of target dependent constructs. Because the API is a concrete object, it can be reused in this way in a very simple fashion. One could envisage libraries of private APIs being built up in this way.

2.5.4. Alternative Program Versions

Consider again the program described in section 2.3.4 which has optional features for displaying its output graphically depending on the boolean value HAVE_X_WINDOWS. By making HAVE_X_WINDOWS part of the user-defined API as a tokenised integer and using:

#if HAVE_X_WINDOWS
...
#endif

to conditionally compile the X Windows code, the choice of whether or not to use this version of the program is postponed to install time. If both POSIX and X Windows are implemented on the target machine the installation is straightforward. HAVE_X_WINDOWS is defined to be true, and the installation proceeds as normal. The case where only POSIX is implemented appears to present problems. The TDF representing the program will contain undefined tokens representing objects from both the POSIX and X Windows APIs. Surely it is necessary to define these tokens (i.e. implement both APIs) in order to install the TDF. But because of the use of conditional compilation, all the applications of X Windows tokens will be inside X_cond constructs on the branch corresponding to HAVE_X_WINDOWS being true. If it is actually false then these branches are stepped over and completely ignored. Thus it does not matter that these tokens are undefined. Hence the conditional compilation constructs within TDF give the same flexibility in the API implementation in this case as do those in C.

3. SORTs and TOKENs

  1. 3.1. Token applications and first-class SORTs
  2. 3.2. Token definitions and declarations
  3. 3.3. A simple use of a TOKEN
  4. 3.4. Second class SORTs

In the syntax of languages like C or Pascal, we find various syntactic units like <Expression>, <Identifier> etc. A SORT bears the same relation to TDF as these syntactic units bear to the language; roughly speaking, the syntactic unit <Expression> corresponds to the SORT EXP and <Identifier> to TAG. However, instead of using BNF to compose syntactic units from others, TDF uses explicit constructors to compose its SORTs; each constructor uses other pieces of TDF of specified SORTs to make a piece of its result SORT. For example, the constructor plus uses an ERROR_TREATMENT and two EXPs to make another EXP.

At the moment, there are 58 different SORTs, from ACCESS to VARIETY, given in tables 1 and 2. Some of these have familiar analogues in standard language construction as with EXP and TAG above. Others will be less familiar since TDF must concern itself with issues not normally addressed in language definitions. For example, the process of linking together TDF programs is at the root of the architecture neutrality of TDF and so must form an integral part of its definition. On the other hand, TDF is not meant to be a language readily accessible to the human reader or writer; computers handle it much more easily. Thus a great many choices have been made in the definition which would be intolerable in a standard language definition for the human programmer but which, paradoxically enough, make it much simpler for a computer to produce and analyse TDF.

The SORTs and constructors in effect form a multi-sorted algebra. There were two principal reasons for choosing this algebraic form of definition. First, it is easy to extend - a new operation on existing constructs simply requires a new constructor. Secondly, the algebraic form is highly amenable to the automatic construction of programs. Large parts of both TDF producers and TDF translators have been created by automatic transformation of the text of the specification document itself, by extracting the algebraic signature and constructing C programs which can read or produce TDF. To this extent, one can regard the specification document as a formal description of the free algebra of TDF SORTs and constructors. Of course, most of the interesting parts of the definition of TDF lie in the equivalences of parts of TDF, so this formality only covers the easy bit.

Another distinction between the TDF definition and language syntactic description is that TDF is to some extent conscious of its own SORTs so that it can specify a new construction of a given SORT. The analogy in normal languages would be that one could define a new construction with new syntax and say that this is an example of an <Expression>, for example; I don't know of any standard language which permits this, although those of you with a historical bent might remember Algol-N, which made a valiant attempt at it. Of course, the algebraic method of description makes this much easier to specify, since there is no need to provide the syntax for the new construction in a language.

3.1. Token applications and first-class SORTs

A new construction is introduced by the SORT TOKEN; the constructors involving TOKENs allow one to give an expansion for the TOKEN in terms of other pieces of TDF, possibly including parameters. We can encapsulate a (possibly parameterised) fragment of TDF of a suitable SORT by giving it a TOKEN as identification. Not all of the SORTs are available for this kind of encapsulation - only those which have a SORTNAME constructor (from access to variety). These are the "first-class" SORTs given in table 1. Each of these has an appropriate _apply_token constructor (e.g. exp_apply_token) to give the expansion. Most of these also have _cond constructors (e.g. exp_cond in section 9.1) which allow translate-time conditional expansion of the SORT.

ACCESS            Properties of TAGs
AL_TAG            Name for alignment
ALIGNMENT         Abstraction of data alignment
BITFIELD_VARIETY  Gives number of bits in bit-field with sign
BOOL              true or false
ERROR_TREATMENT   How to handle errors in operations
EXP               Piece of TDF program, manipulating values
FLOATING_VARIETY  Kind of floating point number of unbounded size
LABEL             Mark on EXP to jump to
NAT               Non-negative static number of unbounded size
NTEST             Test in comparisons
PROCPROPS         Properties of calls and definitions of procedures
ROUNDING_MODE     How to round floating point operations
SHAPE             Abstraction of size and representation of values
SIGNED_NAT        Static number of unbounded size
STRING            Static string of n-bit integers
TAG               Name for a value in run-time program
TRANSFER_MODE     Controls special contents and assignment operations
TOKEN             Installation-time function
VARIETY           Kind of integer used in run-time program

Table 1. First class SORTs

Every TOKEN has a result SORT, i.e. the SORT of its resulting expansion; before it can be expanded, one must also know its parameter SORTs. Thus, you can regard a TOKEN as having a type defined by its result and parameter SORTs, with _apply_token as the operator which expands the encapsulation and substitutes the parameters.

However, if we look at the signature of exp_apply_token:

token_value: TOKEN
token_args:  BITSTREAM param_sorts(token_value)
             -> EXP x

we are confronted by the mysterious BITSTREAM where one might expect to find the actual parameters of the TOKEN.

To explain BITSTREAMs requires a diversion into the bit-encoding of TDF. Constructors for a particular SORT are represented in a number of bits depending on the number of constructors for that SORT; the context will determine the SORT required, so no more bits are required. Thus since there is only one constructor for UNITs, no bits are required to represent make_unit; there are about 120 different constructors for EXPs so 7 bits are required to cover all the EXPs. The parameters of each constructor have known SORTs and so their representations are just concatenated after the representation of the constructor. [a] While this is a very compact representation, it suffers from the defect that one must decode it even just to skip over it. This is very irksome in some applications, notably the TDF linker which is not interested in detailed expansions. Similarly, in translators there are places where one wishes to skip over a token application without knowledge of the SORTs of its parameters. Thus a BITSTREAM is just an encoding of some TDF, preceded by the number of bits it occupies. Applications can then skip over BITSTREAMs trivially. Similar considerations apply to BYTESTREAMs used elsewhere; here the encoding is preceded by the number of bytes in the encoding and is aligned to a byte boundary to allow fast copying.

3.2. Token definitions and declarations

Thus the token_args parameter of exp_apply_token is just the BITSTREAM formed from the actual parameters in the sequence described by the definition of the token_value parameter. This will be given in a TOKEN_DEFN somewhere with constructor token_definition:

result_sort: SORTNAME
tok_params:  LIST(TOKFORMALS)
body:        result_sort
             -> TOKEN_DEFN

The result_sort is the SORT of the construction of body; e.g. if result_sort is formed from exp then body would be constructed using the EXP constructors and one would use exp_apply_token to give the expansion. The list tok_params gives the formal parameters of the definition in terms of TOKFORMALS constructed using make_tok_formals:

sn: SORTNAME
tk: TDFINT
    -> TOKFORMALS

The TDFINT tk will be the integer representation of the formal parameter expressed as a TOKEN whose result sort is sn (see more about name representation in section 3.1). To use the parameter in the body of the TOKEN_DEFN, one simply uses the _apply_token appropriate to sn. Note that sn may be a TOKEN but the result_sort may not.

Hence the BITSTREAM param_sorts(token_value) in the actual parameter of exp_apply_token above is simply formed by the catenation of constructions of the SORTs given by the SORTNAMEs in the tok_params of the TOKEN being expanded.

Usually one gives a name to a TOKEN_DEFN to form a TOKDEF using make_tokdef:

tok:       TDFINT
signature: OPTION(STRING)
def:       BITSTREAM TOKEN_DEFN
           -> TOKDEF

Here, tok gives the name that will be used to identify the TOKEN whose expansion is given by def. Any use of this TOKEN (e.g. in exp_apply_token) will be given by make_token(tok). Once again, a BITSTREAM is used to encapsulate the TOKEN_DEFN.

The significance of the signature parameter is discussed in section 3.2.2.

Often, one wishes a token without giving its definition - the definition could, for example, be platform-dependent. A TOKDEC introduces such a token using make_tokdec:

tok:       TDFINT
signature: OPTION(STRING)
s:         SORTNAME
           -> TOKDEC

Here the SORTNAME, s, is given by token:

result: SORTNAME
params: LIST(SORTNAME)
        -> SORTNAME

which gives the result and parameter SORTs of tok.

One can also use a TOKEN_DEFN in an anonymous fashion by giving it as an actual parameter of a TOKEN which itself demands a TOKEN parameter. To do this one simply uses use_tokdef:

tdef: BITSTREAM TOKEN_DEFN
      -> TOKEN

3.3. A simple use of a TOKEN

The crucial use of TOKENs in TDF is to provide abstractions of APIs (see section 10) but they are also used as shorthand for commonly occurring constructions. For example, given the TDF constructor plus, mentioned above, we could define a plus with only two EXP parameters more suitable to C by using the wrap constructor as the ERROR_TREATMENT:

make_tokdef(C_plus, empty, token_definition(exp(), (make_tokformals(exp(), l),
                                                    make_tokformals(exp(), r)),
            plus(wrap(), exp_apply_token(l, ()), exp_apply_token(r, ())))

3.4. Second class SORTs

Second class SORTs (given in table 2) cannot be TOKENised. These are the "syntactic units" of TDF which the user cannot extend; he can only produce them using the constructors defined in core-TDF.

Some of these constructors are implicit. For example, there are no explicit constructors for LIST or SLIST which are both used to form lists of SORTs; their construction is simply part of the encoding of TDF. However, it is foreseen that LIST constructors would be highly desirable and there will probably be extensions to TDF to promote LIST from a second-class SORT to a first-class one. This will not apply to SLIST or to the other SORTs which have implicit constructions. These include BITSTREAM, BYTESTREAM, TDFINT, TDFIDENT and TDFSTRING.

AL_TAGDEF         Alignment name definition
AL_TAGDEF_PROPS   Body of UNIT containing AL_TAGDEFs
BITSTREAM         Encapsulation of a bit encoding
BYTESTREAM        Encapsulation of a byte encoding
CALLEES           Actual callee parameters
CAPSULE           Independent piece of TDF program
CAPSULE_LINK      Number and kind of linkable entities in CAPSULE
CASELIM           Bounds in case constructor
ERROR_CODE        Encoding for exceptions
EXTERNAL          External name used to connect CAPSULE name
EXTERN_LINK       List of LINKEXTERNs in CAPSULE
GROUP             List of UNITs with same identification
LINK              Connects names in CAPSULE
LINKEXTERN        Used to connect CAPSULE names to outside world
LINKS             List of LINKs
LIST(AUX)         List of AUX SORTs; may have SORTNAME later
OTAGEXP           Describes a formal parameter
PROPS             Program info in a UNIT
SLIST(AUX)        List of AUX SORTs; will not have SORTNAME later
SORTNAME          SORT which can be parameter of TOKEN
TAGACC            Used in constructing proc formals
TAGDEC            Declaration of TAG at UNIT level
TAGDEC_PROPS      Body of UNIT containing TAGDECs
TAGDEF            Definition of TAG at UNIT level
TAGDEF_PROPS      Body of UNIT containing TAGDEFs
TAGSHACC          A formal parameter
TDFBOOL           TDF encoding of a boolean
TDFIDENT          TDF encoding of a byte string
TDFINT            TDF encoding of an integer
TDFSTRING         TDF encoding of an n-bit byte string
TOKDEC            Declaration of a TOKEN
TOKDEC_PROPS      Body of UNIT containing TOKDECs
TOKDEF            Definition of a TOKEN
TOKDEF_PROPS      Body of UNIT containing TOKDEFs
TOKEN_DEFN        Defines TOKEN expansion
TOKFORMALS        Sort and name for parameters in TOKEN_DEFN
UNIQUE            World-wide name
UNIT              Components of CAPSULE with LINKs to other UNITs
VERSION           Version number of TDF
VERSION_PROPS     Body of UNIT containing version information

Table 2. SORTs without SORTNAMEs
  1. [a]

    There are facilities to allow extensions to the number of constructors, so it is not quite as simple as this.


  1. 4.1. make_capsule and name-spaces
    1. 4.1.1. External linkages
    2. 4.1.2. UNITs
    3. 4.1.3. make_unit
    4. 4.1.4. LINK
  2. 4.2. Definitions and declarations
    1. 4.2.1. Scopes and linking
    2. 4.2.2. Declaration and definition signatures
    3. 4.2.3. STRING

A CAPSULE is typically the result of a single compilation - one could regard it as being the TDF analogue of a Unix .o file. Just as with .o files, a set of CAPSULEs can be linked together to form another. Similarly, a CAPSULE may be translated to make a program for some platform, provided certain conditions are met. One of these conditions is obviously that a translator exists for the platform, but there are others. They basically state that any names that are undefined in the CAPSULE can be supplied by the system in which it is to be run. For example, the translator could produce assembly code with external identifiers which will be supplied by some system library.

4.1. make_capsule and name-spaces

The only constructor for a CAPSULE is make_capsule. Its basic function is to compose together UNITs which contain the declarations and definitions of the program. The signature of make_capsule looks rather daunting and is probably best represented graphically.

[Diagram: a CAPSULE built by make_capsule from four parameters: prop_names giving the kinds of UNITs in groups ("tagdecs", "tagdefs"); capsule_linking giving the number of each kind of name in the CAPSULE name-space ("tag" 5, "token" 6, "al_tag" 7); external_linkage giving the links of each kind of name to externals (tag, token and al_tag LINKEXTERNs); and groups, with UNITs of the same kind grouped together in the order given by prop_names (tagdec UNITs, then tagdef UNITs).]
Figure 7. make_capsule

The diagram gives an example of a CAPSULE using the same components as in the following text.

Each CAPSULE has its own name-space, distinct from all other CAPSULEs' name-spaces and also from the name-spaces of its component UNITs. There are several different kinds of names in TDF and each name-space is further subdivided into one for each kind of name. The number of different kinds of names is potentially unlimited but only three are used in core-TDF, namely "tag", "token" and "al_tag". Those names in a "tag" name-space generally correspond to identifiers in normal programs and I shall use these as the paradigm for the properties of them all.

The actual representation of a "tag" name in a given name-space is an integer, described as SORT TDFINT. These integers are drawn from a contiguous set starting from 0 up to some limit given by the constructor which introduces the name-space. For CAPSULE name-spaces, this is given by the capsule_linking parameter of make_capsule:

capsule_linking: SLIST(CAPSULE_LINK)

In the most general case in core-TDF, there would be three entries in the list introducing limits using make_capsule_link for each of the "tag", "token" and "al_tag" name-spaces for the CAPSULE. Thus if:

capsule_linking = (make_capsule_link("tag", 5),
                   make_capsule_link("token", 6),
                   make_capsule_link("al_tag", 7))

there are 5 CAPSULE "tag" names used within the CAPSULE, namely 0, 1, 2, 3 and 4; similarly there are 6 "token" names and 7 "al_tag" names.

4.1.1. External linkages

The context of usage will always determine when and how an integer is to be interpreted as a name in a particular name-space. For example, a TAG in a UNIT is constructed by make_tag applied to a TDFINT which will be interpreted as a name from that UNIT's "tag" name-space. An integer representing a name in the CAPSULE name-space would be found in a LINKEXTERN of the external_linkage parameter of make_capsule.

external_linkage: SLIST(EXTERN_LINK)

Each EXTERN_LINK is itself formed from an SLIST of LINKEXTERNs given by make_extern_link. The order of the EXTERN_LINKs determines which name-space one is dealing with; they are in the same order as given by the capsule_linking parameter. Thus, with the capsule_linking given above, the first EXTERN_LINK would deal with the "tag" name-space; each of its component LINKEXTERNs, constructed by make_linkextern, would identify a tag number with some name external to the CAPSULE; for example one might be:

make_linkextern(4, string_extern("printf"))

This would mean: identify the CAPSULE's "tag" 4 with a name called "printf", external to the module. The name "printf" would be used for linkage external to the CAPSULE; any name required outside the CAPSULE would have to be linked like this.

4.1.2. UNITs

This name "printf", of course, does not necessarily mean the C procedure in the system library. This depends both on the system context in which the CAPSULE is translated and also the meaning of the CAPSULE "tag" name 4 given by the component UNITs of the CAPSULE in the groups parameter of make_capsule:

groups: SLIST(GROUP)

Each GROUP in the groups SLIST will be formed by sets of UNITs of the same kind. Once again, there are a potentially unlimited number of kinds of UNITs but core-TDF only uses those named "tld", "al_tagdefs", "tagdecs", "tagdefs", "tokdecs" and "tokdefs".[b] These names will appear (in the same order as in groups) in the prop_names parameter of make_capsule, one for each kind of UNIT appearing in the CAPSULE:

prop_names: SLIST(TDFIDENT)

Thus if:

prop_names = ("tagdecs", "tagdefs")

then, the first element of groups would contain only "tagdecs" UNITs and the second would contain only "tagdefs" UNITs. A "tagdecs" UNIT contains things rather like a set of global identifier declarations in C, while a "tagdefs" UNIT is like a set of global definitions of identifiers.

4.1.3. make_unit

Now we come to the construction of UNITs using make_unit, as in the diagram below:

[Diagram: a UNIT built by make_unit from three parameters: local_vars giving the sizes of the UNIT's name-spaces in the order given by the CAPSULE's capsule_linking ("tag" 5, "token" 6, "al_tag" 7); lks giving the links between the UNIT name-space and the CAPSULE name-space (tag, token and al_tag LINKs); and properties, a BYTESTREAM which here encodes a TAGDEF_PROPS with no_labels 3 and tds holding the TAGDEFs of the UNIT.]
Figure 8. make_unit

First we give the limits of the various name-spaces local to the UNIT in the local_vars parameter:

local_vars: SLIST(TDFINT)

Just as with external_linkage, the numbers in local_vars correspond (in the same order) to the spaces indicated in capsule_linking in section 3.1. With our example, the first element of local_vars gives the number of "tag" names local to the UNIT, the second gives the number of "token" names local to the UNIT etc. These will include all the names used in the body of the UNIT. Each declaration of a TAG, for example, will use a new number from the "tag" name-space; there is no hiding or reuse of names within a UNIT.

4.1.4. LINK

Connections between the CAPSULE name-spaces and the UNIT name-spaces are made by LINKs in the lks parameter of make_unit:

lks: SLIST(LINKS)

Once again, lks is effectively indexed by the kind of name-space. Each LINKS is an SLIST of LINKs, each of which establishes an identity between names in the CAPSULE name-space and names in the UNIT name-space. Thus if the first element of lks contains:

make_link(42, 4)

then, the UNIT "tag" 42 is identical to the CAPSULE "tag" 4.

Note that names from the CAPSULE name-space only arise in two places, LINKs and LINKEXTERNs. Every other use of a name is derived from some UNIT name-space.
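The two-level naming scheme is easy to model. The sketch below is illustrative only (the dictionaries and the resolve function are my own names, not TDF constructors); it assumes the example values used above: make_link(42, 4) in the UNIT's lks and make_linkextern(4, string_extern("printf")) in the CAPSULE's external linkage.

```python
# Sketch of two-level name resolution: UNIT "tag" -> CAPSULE "tag" -> external.
# The dictionaries model the lks and external_linkage lists from the text;
# the names are illustrative, not TDF constructors.

unit_links = {42: 4}            # make_link(42, 4): UNIT tag 42 == CAPSULE tag 4
extern_links = {4: "printf"}    # make_linkextern(4, string_extern("printf"))

def resolve(unit_tag):
    """Map a UNIT-level tag to an external name, if it has one."""
    capsule_tag = unit_links.get(unit_tag)
    if capsule_tag is None:
        return None             # tag is purely local to the UNIT
    return extern_links.get(capsule_tag)

print(resolve(42))   # the UNIT's tag 42 resolves to the external "printf"
```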

4.2. Definitions and declarations

The encoding in the properties:BYTESTREAM parameter of a UNIT is a PROPS, for which there are five constructors corresponding to the kinds of UNITs in core-TDF, make_al_tagdefs, make_tagdecs, make_tagdefs, make_tokdefs and make_tokdecs. Each of these will declare or define names in the appropriate UNIT name-space which can be used by make_link in the UNIT's lks parameter as well as elsewhere in the properties parameter. The distinction between "declarations" and "definitions" is rather similar to C usage; a declaration provides the "type" of a name, while a definition gives its meaning. For tags, the "type" is the SORT SHAPE (see below). For tokens, the "type" is a SORTNAME constructed from the SORTNAMEs of the parameters and result of the TOKEN using token:

params: LIST(SORTNAME)
result: SORTNAME
        -> SORTNAME

Taking make_tagdefs as a paradigm for PROPS, we have:

no_labels: TDFINT
tds:       SLIST(TAGDEF)
           -> TAGDEF_PROPS

The no_labels parameter introduces the size of yet another name-space local to the PROPS, this time for the LABELs used in the TAGDEFs. Each TAGDEF in tds will define a "tag" name in the UNIT's name-space. The order of these TAGDEFs is immaterial since the initialisations of the tags are values which can be solved at translate time, load time or as unordered dynamic initialisations.

There are three constructors for TAGDEFs, each with slightly different properties. The simplest is make_id_tagdef:

t:         TDFINT
signature: OPTION(STRING)
e:         EXP x
           -> TAGDEF

Here, t is the tag name and the evaluation of e will be the value of SHAPE x of an obtain_tag(t) in an EXP. Note that t is not a variable; the value of obtain_tag(t) will be invariant. The signature parameter gives a STRING (see section 3.2.3) which may be used as a name for the tag, external to TDF, and also as a check introduced by the producer that a tagdef and its corresponding tagdec have the same notion of the language-specific type of the tag.

The two other constructors for TAGDEF, make_var_tagdef and common_tagdef both define variable tags and have the same signature:

t:          TDFINT
opt_access: OPTION(ACCESS)
signature:  OPTION(STRING)
e:          EXP x
            -> TAGDEF

Once again t is the tag name but now e is the initialisation of the variable t. A use of obtain_tag(t) will give a pointer to the variable (of SHAPE POINTER x), rather than its contents.[c] There can only be one make_var_tagdef of a given tag in a program, but there may be more than one common_tagdef, possibly with different initialisations; however these initialisations must overlap consistently just as in common blocks in FORTRAN.

The ACCESS parameter gives various properties required for the tag being defined and is discussed in section 5.3.2.

The initialisation EXPs of TAGDEFs will be evaluated before the "main" program is started. An initialisation EXP must either be a constant (in the sense of section 9) or reduce (either directly or by token or _cond expansions) to an initial_value:

init: EXP s
      -> EXP s

The translator will arrange that init will be evaluated once only before any procedure application, other than those themselves involved in initial_values, but after any constant initialisations. The order of evaluation of different initial_values is arbitrary.
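As a model of this ordering guarantee only (not of any real translator), the following sketch runs hypothetical constant initialisations first, then initial_value expressions in arbitrary mutual order, and only then "main"; all of the names and values are invented for illustration.

```python
# Model of the evaluation order guaranteed for TAGDEF initialisations:
# constants first, then all initial_value EXPs (mutual order arbitrary),
# then the "main" procedure. Every name here is illustrative.

log = []

constants = {"c": lambda: log.append("c") or 10}
initial_values = {"x": lambda: log.append("x") or 32,
                  "y": lambda: log.append("y") or 2}

def start_program(main):
    for f in constants.values():
        f()        # constant initialisations happen first
    for f in initial_values.values():
        f()        # unordered dynamic initialisations
    main()         # only now does "main" run

start_program(lambda: log.append("main"))
print(log[0], log[-1])   # constants come first, main comes last
```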

4.2.1. Scopes and linking

Only names introduced by AL_TAGDEFs, TAGDEFs, TAGDECs, TOKDECs and TOKDEFs can be used in other UNITs (and then, only via the lks parameters of the UNITs involved). You can regard them as being similar to C global declarations. Token definitions include their declarations implicitly; however this is not true of tags. This means that any CAPSULE which uses or defines a tag across UNITs must include a TAGDEC for that tag in its "tagdecs" UNITs. A TAGDEC is constructed using either make_id_tagdec, make_var_tagdec or common_tagdec, all with the same form:

t_intro:   TDFINT
acc:       OPTION(ACCESS)
signature: OPTION(STRING)
x:         SHAPE
           -> TAGDEC

Here the tag name is given by t_intro; the SHAPE x will define the space and alignment required for the tag (this is analogous to the type in a C declaration). The acc field will define certain properties of the tag not implicit in its SHAPE; I shall return to the kinds of properties envisaged in discussing local declarations in section 5.3.

Most of the program will appear in the "tagdefs" UNITs - they will include the definitions of the procedures of the program, which in turn will include local definitions of tags for the locals of the procedures.

The standard TDF linker allows one to link CAPSULEs together using the name identifications given in the LINKEXTERNs, perhaps hiding some of them in the final CAPSULE. It does this just by generating a new CAPSULE name-space, grouping together component UNITs of the same kind and replacing their lks parameters with values derived from the new CAPSULE name-space without changing the UNITs' name-spaces or their props parameters. The operation of grouping together UNITs is effectively assumed to be associative, commutative and idempotent, e.g. if the same tag is declared in two capsules it is assumed to be the same thing. It also means that there is no implied order of evaluation of UNITs or of their component TAGDEFs.

Different languages have different conventions for deciding how programs are actually run. For example, C requires the presence of a suitably defined "main" procedure; this is usually enforced by requiring the system ld utility to bind the name "main" along with the definitions of any library values required. Otherwise, the C conventions are met by standard TDF linking. Other languages have more stringent requirements. For example, C++ requires dynamic initialisation of globals, using initial_value. As the only runnable code in TDF is in procedures, C++ would probably require an additional linking phase to construct a "main" procedure which calls the initialisation procedures of each CAPSULE involved if the system linker did not provide suitable C++ linking.

4.2.2. Declaration and definition signatures

The signature arguments of TAGDEFs and TAGDECs are designed to allow a measure of cross-UNIT checking when linking independently compiled CAPSULEs. Suppose that we have a tag, t, used in one CAPSULE and defined in another; the first CAPSULE would have to have a TAGDEC for t whose TAGDEF is in the second. The signature STRING of both could be arranged to represent the language-specific type of t as understood at compilation-time. Clearly, when the CAPSULEs are linked the types must be identical and hence their STRING representation must be the same - a translator will reject any attempt to link definitions and declarations of the same object with different signatures.

Similar considerations apply to TOKDEFs and TOKDECs; the "type" of a TOKEN may not have any familiar analogue in most HLLs, but the principle remains the same.

4.2.3. STRING

The SORT STRING is used in various constructs other than declarations and definitions. It is a first-class SORT with string_apply_token and string_cond. A primitive STRING is constructed from a TDFSTRING(k, n), which is an encoding of n integers, each of k bits, using make_string:

arg: TDFSTRING(k, n)
     -> STRING(k, n)

STRINGs may be concatenated using concat_string:

arg1: STRING(k, n)
arg2: STRING(k, m)
      -> STRING(k, n + m)

Being able to compose strings, including token applications etc, means that late-binding is possible in signature checking in definitions and declarations. This late-binding means that the representation of platform-dependent HLL types need only be fully expanded at install-time and hence the types could be expressed in their representational form on the specific platform.
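A minimal model of these two constructors, assuming the obvious representation of a STRING(k, n) as a bit-width k plus a tuple of n integers (the Python function names mirror the TDF constructors but are my own sketch, not part of any TDF tool):

```python
# Model STRING(k, n) as (k, tuple of n integers each fitting in k bits).
# concat_string requires equal k and yields a STRING(k, n + m).

def make_string(k, ints):
    assert all(0 <= i < 2 ** k for i in ints), "each integer must fit in k bits"
    return (k, tuple(ints))

def concat_string(s1, s2):
    (k1, v1), (k2, v2) = s1, s2
    assert k1 == k2, "concat_string requires equal bit-widths"
    return (k1, v1 + v2)

# Concatenating two 8-bit strings gives an 8-bit string of the combined length.
sig = concat_string(make_string(8, [105, 110, 116]), make_string(8, [32, 42]))
print(sig)
```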

  1. [b]

The "tld" UNITs give usage information to aid the linker, tld, in discovering which names have definitions. The C producer also optionally constructs "diagnostics" UNITs (to give run-time diagnostic information).

  2. [c]

    There is a similar distinction between tags introduced to be locals of a procedure using identify and variable (see section 5.3.1).


  1. 5.1. Shapes
    1. 5.1.1. TOP, BOTTOM, LUB
    2. 5.1.2. INTEGER
    3. 5.1.3. FLOATING and complex
    4. 5.1.4. BITFIELD
    5. 5.1.5. PROC
    6. 5.1.6. Non-primitive SHAPEs
  2. 5.2. Alignments
    1. 5.2.1. ALIGNMENT constructors
    2. 5.2.2. Special alignments
    3. 5.2.3. AL_TAG, make_al_tagdef
  3. 5.3. Pointer and offset SHAPEs
    1. 5.3.1. OFFSET
  4. 5.4. Compound SHAPEs
    1. 5.4.1. Offset arithmetic with compound shapes
    2. 5.4.2. offset_mult
    3. 5.4.3. OFFSET ordering and representation
  5. 5.5. BITFIELD alignments

In most languages there is some notion of the type of a value. This is often an uncomfortable mix of a definition of a representation for the value and a means of choosing which operators are applicable to it. The TDF analogue of the type of a value is its SHAPE (S3.20). A SHAPE is only concerned with the representation of a value, being an abstraction of its size and alignment properties. Clearly an architecture-independent representation of a program cannot say, for example, that a pointer is 32 bits long; the size of pointers has to be abstracted so that translations to particular architectures can choose the size that is apposite for the platform.

5.1. Shapes

There are ten different basic constructors for the SORT SHAPE, from bitfield to top, as shown in table 3. SHAPEs arising from these constructors are used as qualifiers (just using an upper case version of the constructor name) to various SORTs in the definition; for example, EXP TOP is an expression with top SHAPE. This is used for definitional purposes only; there is no SORT SHAPENAME as one has SORTNAME.

SHAPE           Qualifier               Meaning                               ALIGNMENT
BITFIELD(v)     BITFIELD_VARIETY        As in C bitfields, e.g. int x:5       {v}
BOTTOM                                  It never gets here, e.g. goto         (none)
COMPOUND(sz)    OFFSET(x, y)            Structs or unions; OFFSET sz is size  x ⊇ set-union of field alignments
FLOATING(fv)    FLOATING_VARIETY        Floating point numbers                {fv}
INTEGER(v)      VARIETY                 Integers, including chars             {v}
NOF(n, s)       (NAT, SHAPE)            Tuple of n values of SHAPE s          {alignment(s)}
OFFSET(a1, a2)  (ALIGNMENT, ALIGNMENT)  Offsets in memory; a1 ⊇ a2            {offset}
POINTER(a)      ALIGNMENT               Pointers in memory                    {pointer}
PROC                                    Procedure values                      {proc}
TOP                                     No value; e.g. the result of assign   { }
Table 3. SHAPEs

In the TDF specification of EXPs, you will observe that all EXPs in constructor signatures are qualified by a SHAPE name; for example, a parameter might be EXP INTEGER(v). This merely means that for the construct to be meaningful the parameter must be derived from a constructor defined to be an EXP INTEGER(v). You might be forgiven for assuming that TDF is hence strongly-typed by its SHAPEs. This is not true; the producer must get it right. There are some checks in translators, but these are not exhaustive and are more for the benefit of translator writers than for the user. A tool for testing the SHAPE correctness of a TDF program would be useful but has yet to be written.

5.1.1. TOP, BOTTOM, LUB

Two of the SHAPE constructions are rather specialised; these are TOP and BOTTOM. The result of any expression with a TOP shape will always be discarded; examples are those produced by assign and integer_test. A BOTTOM SHAPE is produced by an expression which will leave the current flow of control, e.g. goto. The significance of these SHAPEs only really impinges on the computation of the shapes of constructs which have alternative expressions as results. For example, the result of conditional is the result of one of its component expressions. In this case, the SHAPE of the result is described as the LUB of the SHAPEs of the components. This simply means that if one of the component SHAPEs is TOP then the resulting SHAPE is TOP; if one is BOTTOM then the resulting SHAPE is the SHAPE of the other; otherwise both component SHAPEs must be equal and this is the resulting SHAPE. Since this operation is associative, commutative and idempotent, we can speak quite unambiguously of the LUB of several SHAPEs.
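The LUB rule can be written down directly. The sketch below is a model of the rule only, with strings standing in for real SHAPE encodings; the function names are mine.

```python
from functools import reduce

# Model the LUB rule for SHAPEs: TOP absorbs, BOTTOM is the identity,
# and any other pair of SHAPEs must be equal.

def lub2(a, b):
    if a == "TOP" or b == "TOP":
        return "TOP"
    if a == "BOTTOM":
        return b
    if b == "BOTTOM":
        return a
    if a != b:
        raise ValueError("SHAPE mismatch: %r vs %r" % (a, b))
    return a

def lub(shapes):
    # Associative, commutative and idempotent, so a simple fold suffices.
    return reduce(lub2, shapes)

print(lub(["BOTTOM", "INTEGER(v)", "INTEGER(v)"]))  # INTEGER(v)
print(lub(["INTEGER(v)", "TOP"]))                   # TOP
```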

5.1.2. INTEGER

Integer values in TDF have shape INTEGER(v) where v is of SORT VARIETY. The constructor for this SHAPE is integer with a VARIETY parameter. The basic constructor for VARIETY is var_limits which has a pair of signed natural numbers as parameters giving the limits of possible values that the integer can attain. The SHAPE required for a 32 bit signed integer would be:

integer(var_limits(-2^31, 2^31 - 1))

while an unsigned char is:

integer(var_limits(0, 255))

A translator should represent each integer variety by an object big enough (or bigger) to contain all the possible values within the limits of the VARIETY. That being said, I must confess that most current translators do not handle integers of more than the maximum given naturally by the target architecture, but this will be rectified in due course.

The other way of constructing a VARIETY is to specify the number of bits required for its 2s-complement representation using var_width:

signed_width: BOOL
width:        NAT
              -> VARIETY
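As a sketch of what "big enough (or bigger)" might mean on a conventional byte-oriented target, the following hypothetical chooser picks the smallest of the widths 8, 16, 32 and 64 whose range covers the var_limits bounds; a real translator makes an equivalent but target-specific choice, so this is a model only.

```python
# Sketch: choose the smallest common machine width (8/16/32/64 bits) whose
# range contains the limits of a VARIETY. Illustrative, not translator code.

def representation_bits(lower, upper):
    signed = lower < 0
    for bits in (8, 16, 32, 64):
        if signed:
            lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        else:
            lo, hi = 0, 2 ** bits - 1
        if lo <= lower and upper <= hi:
            return bits, signed
    raise ValueError("no supported representation")

print(representation_bits(0, 255))             # unsigned char fits in 8 bits
print(representation_bits(-2**31, 2**31 - 1))  # 32-bit signed integer
```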

5.1.3. FLOATING and complex

Similarly, floating point and complex numbers have shape FLOATING qualified by a FLOATING_VARIETY.

A FLOATING_VARIETY for a real number is constructed using fvar_parms:

base:              NAT
mantissa_digits:   NAT
minimum_exponent:  NAT
maximum_exponent:  NAT
                   -> FLOATING_VARIETY

A FLOATING_VARIETY specifies the base, number of mantissa digits, and maximum and minimum exponent. Once again, it is intended that the translator will choose a representation which will contain all possible values, but in practice only those which are included in IEEE float, double and extended are actually implemented.

Complex numbers have a floating variety constructed by complex_parms, which has the same signature as fvar_parms. The representation of these numbers is likely to be a pair of real numbers each defined as if by fvar_parms with the same arguments. The real and imaginary parts of a complex number can be extracted using real_part and imaginary_part; these could have been injected into the complex number using make_complex or any of the complex operations. Many translators will simply transform complex numbers into COMPOUNDs consisting of two floating point numbers, transforming the complex operations into floating point operations on the fields.

5.1.4. BITFIELD

A number of contiguous bits have shape BITFIELD, qualified by a BITFIELD_VARIETY (S3.4) which gives the number of bits involved and whether these bits are to be treated as signed or unsigned integers. Current translators put a maximum of 32 or 64 on the number of bits.

5.1.5. PROC

The representational SHAPE of procedure values is given by PROC with constructor proc. I shall return to this in the description of the operations which use it.

5.1.6. Non-primitive SHAPEs

The construction of the other four SHAPEs involves either existing SHAPEs or the alignments of existing SHAPEs. These are constructed by compound, nof, offset and pointer. Before describing these, we require a digression into what is meant by alignments and offsets.

5.2. Alignments

In most processor architectures there are limitations on how one can address particular kinds of objects in convenient ways. These limitations are usually defined as part of the ABI for the processor. For example, in the MIPS processor the fastest way to access a 32-bit integer is to ensure that the address of the integer is aligned on a 4-byte boundary in the address space; obviously one can extract a mis-aligned integer, but not in one machine instruction. Similarly, 16-bit integers should be aligned on a 2-byte boundary. In principle, each primitive object could have similar restrictions for efficient access and these restrictions could vary from platform to platform. Hence, the notion of alignment has to be abstracted to form part of the architecture independent TDF - we cannot assume that any particular alignment regime will hold universally.

The abstraction of alignments clearly has to cover compound objects as well as primitive ones like integers. For example, if a field of a structure in C is to be accessed efficiently, then the alignment of the field will influence the alignment of the structure as a whole; the structure itself could be a component of a larger object whose alignment must then depend on the alignment of the structure and so on. In general, we find that a compound alignment is given by the maximum alignment of its components, regardless of the form of the compound object, e.g. whether it is a structure, union, array or whatever.

This gives an immediate handle on the abstraction of the alignment of a compound object - it is just the set of abstractions of the alignments of its components. Since "maximum" is associative, commutative and idempotent, the component sets can be combined using normal set-union rules. In other words, a compound alignment is abstracted as the set of alignments of the primitive objects which make up the compound object. Thus the alignment abstraction of a C structure with only float fields is the singleton set containing the alignment of a float while that of a C union of an int and this structure is a pair of the alignments of an int and a float.

5.2.1. ALIGNMENT constructors

The TDF abstraction of an alignment has SORT ALIGNMENT. The constructor, unite_alignments, gives the set-union of its ALIGNMENT parameters; this would correspond to taking a maximum of two real alignments in the translator.

The constructor, alignment, gives the ALIGNMENT of a given SHAPE according to the rules given in the definition. These rules effectively define the primitive ALIGNMENTs as in the ALIGNMENT column of table 3. Those for PROC, all OFFSETs and all POINTERs are constants regardless of any SHAPE qualifiers. Each of the INTEGER VARIETYs, each of the FLOATING VARIETYs and each of the BITFIELD VARIETYs have their own ALIGNMENTs. These ALIGNMENTs will be bound to values apposite to the particular platform at translate-time. The ALIGNMENT of TOP is conventionally taken to be the empty set of ALIGNMENTs (corresponding to the minimum alignment on the platform).
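The set model of ALIGNMENTs can be sketched in a few lines: unite_alignments becomes set union, and binding the primitive alignments to platform values recovers the real "maximum alignment" a translator would use. The platform numbers below are hypothetical, not mandated by TDF.

```python
# Model ALIGNMENTs as sets of primitive alignment names. unite_alignments is
# then set union; a translator recovers a concrete alignment by binding each
# primitive to a platform value and taking the maximum.

def unite_alignments(a, b):
    return a | b

# Hypothetical platform bindings (in bits); real values come from the ABI.
platform = {"char": 8, "int": 32, "float": 32}

def real_alignment(al):
    # {} is the ALIGNMENT of TOP: the minimum alignment on the platform.
    return max((platform[p] for p in al), default=1)

# A struct with only float fields, then a union of an int and that struct.
struct_al = unite_alignments(frozenset({"float"}), frozenset({"float"}))
union_al = unite_alignments(frozenset({"int"}), struct_al)
print(real_alignment(union_al))
```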

The alignment of a procedure parameter clearly has to include the alignment of its SHAPE; however, most ABIs will mandate a greater alignment for some SHAPEs e.g. the alignment of a byte parameter is usually defined to be on a 32-bit rather than an 8-bit boundary. The constructor, parameter_alignment, gives the ALIGNMENT of a parameter of given SHAPE.

5.2.2. Special alignments

There are several other special ALIGNMENTs.

The alignment of a code address is {code} given by code_alignment; this will be the alignment of a pointer given by make_local_lv giving the value of a label.

The other special ALIGNMENTs are considered to include all of the others, but remain distinct. They are all concerned with offsets and pointers relevant to procedure frames, procedure parameters and local allocations and are collectively known as frame alignments. These frame alignments differ from the normal alignments in that their mapping to a given architecture is rather more than just saying that it describes some n-bit boundary. For example, alloca_alignment describes the alignment of dynamic space produced by local_alloc (roughly the C alloca). Now, an ABI could specify that the alloca space is a stack disjoint from the normal procedure stack; thus manipulations of space at alloca_alignment may involve different code to space generated in other ways.

Similar considerations apply to the other special alignments, callees_alignment(b), callers_alignment(b) and locals_alignment. The first two give the alignments of the bases of the two different parameter spaces in procedures (q.v.) and locals_alignment gives the alignment of the base of locally declared tags within a procedure. The exact interpretation of these depends on how the frame stack is defined in the target ABI, e.g. does the stack grow downwards or upwards?

The final special alignment is var_param_alignment. This describes the alignment of a special kind of parameter to a procedure which can be of arbitrary length (see section 5.1.1).

5.2.3. AL_TAG, make_al_tagdef

Alignments can also be named as AL_TAGs using make_al_tagdef. There is no corresponding make_al_tagdec since AL_TAGs are implicitly declared by their constructor, make_al_tag. The main reason for having names for alignments is to allow one to resolve the ALIGNMENTs of recursive data structures. If, for example, we have mutually recursive structures, their ALIGNMENTs are best named and given as a set of equations formed by AL_TAGDEFs. A translator can then solve these equations trivially by substitution; this is easy because the only significant operation is set-union.

5.3. Pointer and offset SHAPEs

A pointer value must have a form which reflects the alignment of the object that it points to; for example, in the MIPS processor, the bottom two bits of a pointer to an integer must be zero. The TDF SHAPE for a pointer is POINTER qualified by the ALIGNMENT of the object pointed to. The constructor pointer uses this alignment to make a POINTER SHAPE.

5.3.1. OFFSET

Expressions which give sizes or offsets in TDF have an OFFSET SHAPE. These are always described as the difference between two pointers. Since the alignments of the objects pointed to could be different, an OFFSET is qualified by these two ALIGNMENTs. Thus an EXP OFFSET(X,Y) is the difference between an EXP POINTER(X) and an EXP POINTER(Y). In order for the alignment rules to apply, the set X of alignments must include Y. The constructor offset uses two such alignments to make an OFFSET SHAPE. However, many instances of offsets will be produced implicitly by the offset arithmetic, e.g., offset_pad:

a:    ALIGNMENT
arg1: EXP OFFSET(z, t)
      -> EXP OFFSET(z ∪ a, a)

This gives the next OFFSET greater than or equal to arg1 at which an object of ALIGNMENT a can be placed. It should be noted that the calculation of shapes and alignments is entirely a translate-time activity; only EXPs produce runnable code. This code, of course, may depend on the shapes and alignments involved; for example, offset_pad might round up arg1 to be a multiple of four bytes if a was an integer ALIGNMENT and z was a character ALIGNMENT. Translators also do extensive constant analysis, so if arg1 was a constant offset, the round-up would be done at translate-time to produce another constant.
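On a typical byte-addressed target where OFFSETs are represented as single integers, offset_pad reduces to rounding up. A sketch under that assumption, with alignments expressed in bytes (the function name mirrors the TDF constructor but the integer representation is a modelling choice):

```python
# Sketch of offset_pad on a byte-addressed target where OFFSETs are plain
# integers: round off up to the next multiple of a's alignment in bytes.
# A translator folds this at translate-time when the offset is constant.

def offset_pad(alignment_bytes, off):
    return (off + alignment_bytes - 1) // alignment_bytes * alignment_bytes

print(offset_pad(4, 1))   # a char-aligned offset padded up to int alignment
print(offset_pad(4, 8))   # already aligned, so unchanged
```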

5.4. Compound SHAPEs

The alignments of compound SHAPEs (i.e. those arising from the constructors compound and nof) are derived from the constructions which produced the SHAPE. To take the easy one first, the constructor nof has signature:

n: NAT
s: SHAPE
   -> SHAPE

This SHAPE describes an array of n values all of SHAPE s; note that n is a natural number and hence is a constant known to the producer. Throughout the definition this is referred to as the SHAPE NOF(n, s). The ALIGNMENT of such a value is alignment(s); i.e. the alignment of an array is just the alignment of its elements.

The other compound SHAPEs are produced using compound:

sz: EXP OFFSET(x, y)
    -> SHAPE

The sz parameter gives the minimum size which can accommodate the SHAPE.

5.4.1. Offset arithmetic with compound shapes

The constructors offset_add, offset_zero and shape_offset are used together with offset_pad to implement (inter alia) selection from structures represented by COMPOUND SHAPEs. Starting from the zero OFFSET given by offset_zero, one can construct an EXP which is the offset of a field by padding and adding offsets until the required field is reached. The value of the field required could then be extracted using component or add_to_ptr. Most producers would define a TOKEN for the EXP OFFSET of each field of a structure or union used in the program simply to reduce the size of the TDF.

The SHAPE of a C structure consisting of a char followed by an int would require x to be the set consisting of two INTEGER VARIETYs, one for int and one for char, and sz would probably have been constructed like:

sz = offset_add(offset_pad(int_al, shape_offset(char)), shape_offset(int))

The various rules for the ALIGNMENT qualifiers of the OFFSETs give the required SHAPE; these rules also ensure that offset arithmetic can be implemented simply using integer arithmetic for standard architectures (see section 13.1). Note that the OFFSET computed here is the minimum size for the SHAPE. This would not in general be the same as the difference between successive elements of an array of these structures which would have SHAPE OFFSET(x, x) as produced by offset_pad(x, sz). For examples of the use of OFFSETs to access and create structures, see section 12.
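On a typical byte-addressed target the sz calculation above can be mirrored in C; that the padded offset agrees with offsetof is a property of common ABIs assumed here for illustration, not a TDF guarantee:

```c
#include <assert.h>
#include <stddef.h>

/* A char followed by an int, as in the example above. */
struct S { char c; int i; };

/* Mirror sz = offset_add(offset_pad(int_al, shape_offset(char)),
                          shape_offset(int)):
   pad the size of the char up to int alignment. */
static size_t padded_int_offset(void)
{
    size_t int_al = _Alignof(int);
    return (sizeof(char) + int_al - 1) & ~(int_al - 1);
}
```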

5.4.2. offset_mult

In C, all structures have size known at translate-time. This means that OFFSETs for all field selections of structures and unions are translate-time constants; there is never any need to produce code to compute these sizes and offsets. Other languages (notably Ada) do have variable size structures and so sizes and offsets within these structures may have to be computed dynamically. Indexing in C will require the computation of dynamic OFFSETs; this would usually be done by using offset_mult to multiply an offset expression representing the stride by an integer expression giving the index:

arg1: EXP OFFSET(x, x)
arg2: EXP INTEGER(v)
-> EXP OFFSET(x, x)

and then using add_to_ptr to combine a pointer expression giving the base of the array with the resulting OFFSET.
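In C terms, indexing an array of structures is an offset_mult of the stride by the index followed by an add_to_ptr; a sketch (the helper names are illustrative):

```c
#include <assert.h>

struct S { char c; int i; };

/* add_to_ptr(base, offset_mult(sizeof(struct S), index)) in C terms. */
static struct S *index_ptr(struct S *base, long index)
{
    return (struct S *)((char *)base + index * (long)sizeof(struct S));
}

/* The computed pointer agrees with ordinary subscripting. */
static int agrees_with_subscript(void)
{
    static struct S a[5];
    return index_ptr(a, 3) == &a[3];
}
```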

5.4.3. OFFSET ordering and representation

There is an ordering defined on OFFSETs with the same alignment qualifiers, as given by offset_test and offset_max having properties like:

shape_offset(S) ≥ offset_zero(alignment(S))
A ≥ B   iff   offset_max(A, B) = A
offset_add(A, B) ≥ A   where B ≥ offset_zero(some compatible alignment)

In most machines, OFFSETs would be represented as single integer values with the OFFSET ordering corresponding to simple integer ordering. The offset_add constructor just translates to simple addition with offset_zero as 0 with similar correspondences for the other offset constructors. You might well ask why TDF does not simply use integers for offsets, instead of introducing the rather complex OFFSET SHAPE. The reasons are two-fold. First, following the OFFSET arithmetic rules concerned with the ALIGNMENT qualifiers will ensure that one never extracts a value from a pointer with the wrong alignment by, for example, applying contents to an add_to_pointer. This frees TDF from having to define the effect of strange operations like forming a float by taking the contents of a pointer to a character which may be mis-aligned with respect to floats - a heavy operation on most processors. The second reason is quite simple; there are machines which cannot represent OFFSETs by a single integer value.

The iAPX-432 is a fairly extreme example of such a machine; it is a "capability" machine which must segregate pointer values and non-pointer values into different spaces. On this machine a value of SHAPE POINTER({pointer, int}) (e.g. a pointer to a structure containing both integers and pointers) could have two components; one referring to the pointers and another to the integers. In general, offsets from this pointer would also have two components, one to pick out any pointer values and the other the integer values. This would obviously be the case if the original POINTER referred to an array of structures containing both pointers and integers; an offset to an element of the array would have SHAPE OFFSET({pointer, int}, {pointer, int}); both elements of the offset would have to be used as displacements to the corresponding elements of the pointer to extract the structure element. The OFFSET ordering is now given by the comparison of both displacements. Using this method, one finds that pointers in store to non-pointer alignments are two words in different blocks and pointers to pointer-alignments are four words, two in one block and two in another. This sounds like a very unwieldy machine compared to normal machines with linear addressing. However, who knows what similar strange machines will appear in future; the basic conflicts between security, integrity and flexibility that the iAPX-432 sought to resolve are still with us. For more on the modelling of pointers and offsets see section 13.

5.5. BITFIELD alignments

Even in standard machines, one finds that the size of a pointer may depend on the alignment of the data pointed at. Most machines do not allow one to construct pointers to bits with the same facility as other alignments. This usually means that pointers in memory to BITFIELD VARIETYs must be implemented as two words with an address and bit displacement. One might imagine that a translator could implement BITFIELD alignments so that they are the same as the smallest natural alignment of the machine and avoid the bit displacement, but this is not the intention of the definition. On any machine for which it is meaningful, the alignment of a BITFIELD must be one bit; in other words successive BITFIELDs are butted together with no padding bits. [d] Within the limits of what one can extract from BITFIELDs, namely INTEGER VARIETYs, this is how one should implement non-standard alignments, perhaps in constructing data, such as protocols, for exchange between machines. One could implement some Ada representational statements in this way; certainly the most commonly used ones.

TDF Version 3.0 does not allow one to construct a pointer of SHAPE POINTER(b) where b consists entirely of bitfield alignments; this relieves the translators of the burden of doing general bit-addressing. Of course, this simply shifts the burden to the producer. If the high level language requires the construction of a pointer to an arbitrary bit position, then the producer is required to represent such a pointer as a pair consisting of a pointer to some alignment including the required bitfield and an offset from this alignment to the bitfield. For example, Ada may require the construction of a pointer to a boolean for use as the parameter to a procedure; the SHAPE of the representation of this Ada pointer could be a COMPOUND formed from a POINTER({x,b}) and an OFFSET({x, b}, b) where b is the alignment of a 1-bit bitfield. To access the boolean, the producer would use the elements of this pair as arguments to bitfield_assign and bitfield_contents (Assigning and extracting bitfields.).
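The pair representation can be sketched in C, with the hypothetical names bit_contents and bit_assign standing in for bitfield_contents and bitfield_assign applied through such a pair:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pair: a pointer to an enclosing alignment plus a bit offset. */
struct bitptr {
    uint8_t *base;   /* plays the POINTER({x,b}) component */
    unsigned bit;    /* plays the OFFSET({x,b}, b) component, in bits */
};

/* Extract the addressed bit. */
static int bit_contents(struct bitptr p)
{
    return (p.base[p.bit / 8] >> (p.bit % 8)) & 1;
}

/* Assign the addressed bit. */
static void bit_assign(struct bitptr p, int v)
{
    uint8_t mask = (uint8_t)(1u << (p.bit % 8));
    if (v) p.base[p.bit / 8] |= mask;
    else   p.base[p.bit / 8] &= (uint8_t)~mask;
}

/* Set then read back bit 10 (byte 1, bit 2) of a small buffer. */
static int roundtrip(void)
{
    static uint8_t buf[4];
    struct bitptr p = { buf, 10 };
    bit_assign(p, 1);
    return bit_contents(p);
}
```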

  1. [d]

    Note that this is not generally true for C bitfields; most C ABIs have (different) rules for inserting padding bits depending on the size of the bitfield and its relation to the natural alignments. This is a fruitful source of errors in data exchange between different C ABIs. For more on similar limitations of bitfields in TDF, see Assigning and extracting bitfields.

6. Procedures and Locals

  1. 6.1. make_proc and apply_proc
    1. 6.1.1. vartag, varparam
  2. 6.2. make_general_proc and apply_general_proc
    1. 6.2.1. tail_call
    2. 6.2.2. PROCPROPS
  3. 6.3. Defining and using locals
    1. 6.3.1. identify, variable
    2. 6.3.2. ACCESS
    3. 6.3.3. current_env, env_offset
    4. 6.3.4. local_alloc, local_free_all, last_local
  4. 6.4. Heap storage

All procedures in TDF are essentially global; the only values which are accessible from the body of a procedure are those which are derived from global TAGs (introduced by TAGDEFs or TAGDECs), local TAGs defined within the procedure and parameter TAGs of the procedure.

All executable code in TDF will arise from an EXP PROC made by either make_proc or make_general_proc. They differ in their treatment of how space for the actual parameters of a call is managed; in particular, is it the caller or the callee which deallocates the parameter space?

With make_proc, this management is conceptually done by the caller at an apply_proc; i.e. the normal C situation. This suffers from the limitation that tail-calls of procedures are then only possible in restricted circumstances (e.g. the space for the parameters of the tail-call must be capable of being included in the caller's parameters) and could only be implemented as an optimisation within a translator. A producer could not predict these circumstances in a machine independent manner, whether or not it knew that a tail-call was valid.

An alternative would be to make the management of parameter space the responsibility of the called procedure. Rather than do this, make_general_proc (and apply_general_proc) splits the parameters into two sets, one whose allocation is the responsibility of the caller and the other whose allocation is dealt with by the callee. This allows an explicit tail_call to be made to a procedure with new callee parameters; the caller parameters for the tail_call will be the same as (or some initial subset of) the caller parameters of the procedure containing the tail_call.

A further refinement of make_general_proc is to allow access to the caller parameter space in a postlude at the call of the procedure using an apply_general_proc. This allows simple implementations of Ada out_parameters, or more generally, multiple results of procedures.

6.1. make_proc and apply_proc

The make_proc constructor has signature:

result_shape: SHAPE
params_intro: LIST(TAGSHACC)
var_intro:    OPTION(TAGACC)
body:         EXP BOTTOM
              -> EXP PROC

The params_intro and var_intro parameters introduce the formal parameters of the procedure which may be used in body. The procedure result will have SHAPE result_shape and will usually be given by some return construction within body. The basic model is that the actual parameters will be copied by value into space supplied by some apply_proc, and the body will treat this space effectively as local variables.

Each straightforward formal parameter is introduced by an auxiliary SORT TAGSHACC using make_tagshacc:

sha:        SHAPE
opt_access: OPTION(LIST(ACCESS))
tg_intro:   TAG POINTER(alignment(sha))
            -> TAGSHACC

Within body, the formal will be accessed using tg_intro; it is always considered to be a pointer to the space of SHAPE sha allocated by apply_proc, hence the pointer SHAPE.

For example, if we had a simple procedure with one integer parameter, var_intro would be empty and params_intro might be:

params_intro = make_tagshacc(integer(v), empty, make_tag(13))

Then, TAG 13 from the enclosing UNIT's name-space is identified with the formal parameter with SHAPE POINTER(INTEGER(v)). Any use of obtain_tag(make_tag(13)) in body will deliver a pointer to the integer parameter. I shall return to the meaning of opt_access and the ramifications of the scope and extent of TAGs involved in conjunction with local declarations in section 6.3.1.

Procedures, whether defined by make_proc or make_general_proc, will usually terminate and deliver their result with a return:

arg1: EXP x

Here x must be identical to the result_shape of the call of the procedure. There may be several returns in body; the SHAPE x in each will be the same. Some languages allow different types to be returned depending on the particular call; the producer must resolve this issue. For example, C allows one to deliver void if the resulting value is not used. In TDF a dummy value must be provided at the return; for example make_value(result_shape).

Note that the body has SHAPE BOTTOM since all possible terminations to a procedure have SHAPE BOTTOM.

Procedures defined by make_proc are called using apply_proc:

result_shape: SHAPE
arg1:         EXP PROC
arg2:         LIST(EXP)
varparam:     OPTION(EXP)
              -> EXP result_shape

Here arg1 is the procedure to be called and arg2 gives the actual parameters. There must be at least as many actual parameters as given (with the same SHAPE) in the params_intro of the corresponding make_proc for arg1. [e] The values of arg2 will be copied into space managed by the caller.

The SHAPE of the result of the call is given by result_shape which must be identical to the result_shape of the make_proc.

6.1.1. vartag, varparam

Use of the var_intro OPTION in make_proc and the corresponding varparam in apply_proc allows one to have a parameter of any SHAPE, possibly differing from call to call, where the actual SHAPE can be deduced in some way by the body of the make_proc. One supplies an extra actual parameter, varparam, which usually would be a structure grouping some set of values. The body of the procedure can then access these values using the pointer given by the TAG var_intro, using add_to_ptr with some computed offsets to pick out the individual fields.

This is a slightly different method of giving a variable number of parameters to a procedure, rather than simply giving more actuals than formals. The principal difference is in the alignment of the components of varparam; these will be laid out according to the default padding defined by the component shapes. In most ABIs, this padding is usually different to the way parameters are laid out; for example, character parameters are generally padded out to a full word. Thus a sequence of parameters of given shape has a different layout in store to the same sequence of shapes in a structure. If one wished to pass an arbitrary structure to a procedure, one would use the varparam option rather than passing the fields individually as extra actual parameters.
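The layout difference has a direct C analogue: in a structure two chars are butted at offsets 0 and 1, while as extra actual parameters each char undergoes the default argument promotions and must be fetched back as an int. This standard C behaviour is used here purely as an analogy:

```c
#include <assert.h>
#include <stdarg.h>
#include <stddef.h>

/* In a structure, the two chars sit at offsets 0 and 1. */
struct two { char c; char d; };

/* As extra actuals beyond the formals (cf. footnote [e]), each char
   is promoted to int, so the callee reads promoted values back. */
static char second_char(int n, ...)
{
    va_list ap;
    char d;
    va_start(ap, n);
    (void)va_arg(ap, int);        /* skip the first promoted char */
    d = (char)va_arg(ap, int);    /* recover the second one */
    va_end(ap);
    return d;
}
```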

6.2. make_general_proc and apply_general_proc

A make_general_proc has signature:

result_shape: SHAPE
prcprops:     OPTION(PROCPROPS)
caller_intro: LIST(TAGSHACC)
callee_intro: LIST(TAGSHACC)
body:         EXP BOTTOM
              -> EXP PROC

Here the formal parameters are split into two sets, caller_intro and callee_intro, each given by a list of TAGSHACCs just as in make_proc. The distinction between the two sets is that the make_general_proc is responsible for deallocating any space required for the callee parameter set; this really only becomes obvious at uses of tail_call within body.

The result_shape and body have the same general properties as in make_proc. In addition prcprops gives other information both about body and the way that the procedure is called. PROCPROPS are a set drawn from check_stack, inline, no_long_jump_dest, untidy, var_callees and var_callers. The set is composed using add_procprops. The PROCPROPS no_long_jump_dest is a property of body only; it indicates that none of the labels within body will be the target of a long_jump construct. The other properties should also be given consistently at all calls of the procedure; they are discussed in section 6.2.2.

A procedure, p, constructed by make_general_proc is called using apply_general_proc:

result_shape:  SHAPE
prcprops:      OPTION(PROCPROPS)
p:             EXP PROC
caller_params: LIST(OTAGEXP)
callee_params: CALLEES
postlude:      EXP TOP
               -> EXP result_shape

The actual caller parameters are given by caller_params as a list of OTAGEXPs constructed using make_otagexp:

tgopt: OPTION(TAG x)
e:     EXP x
       -> OTAGEXP

Here, e is the value of the parameter and tgopt, if present, is a TAG which will be bound to the final value of the parameter (after body is evaluated) in the postlude expression of the apply_general_proc. [f] Clearly, this allows one to use a caller parameter as an extra result of the procedure; for example, as in Ada out-parameters.

The actual callee_params may be constructed in three different ways. The usual method is to use make_callee_list, giving a list of actual EXP parameters, corresponding to the callee_intro list in the obvious way. The constructor same_callees allows one to use the callees of the current procedure as the callees of the call; this, of course, assumes that the formals of the current procedure are compatible with the formals required for the call. The final method allows one to construct a dynamically sized set of CALLEES; make_dynamic_callees takes a pointer and a size (expressed as an OFFSET) to make the CALLEES; this will be used in conjunction with a var_callees PROCPROPS (see section 6.2.2).

Some procedures can be expressed using either make_proc or make_general_proc. For example:

make_proc(S, L, empty, B) = make_general_proc(S, var_callers, L, empty, B)

6.2.1. tail_call

Often the result of a procedure, f, is simply given by the call of another (or the same) procedure, g. In appropriate circumstances, the same stack space can be used for the call of g as the call of f. This can be particularly important where heavily recursive routines are involved; some languages even use tail recursion as the preferred method of looping.

One condition for such a tail call to be applicable is knowing that g does not require any pointers to locals of f; this is often implicit in the language involved. Equally important is that the action on the return from f is indistinguishable from the return from g. For example, if it were the caller's responsibility to pop the space for the parameters on return from a call, then the tail call of g would only work if g had the same parameter space as f.

This is the justification for splitting the parameter set of a general proc; it is (at least conceptually) the caller's responsibility to pop the caller-parameters only - the callee-parameters are dealt with by the procedure itself. Hence we can define tail_call which uses the same caller-parameters, but a different set of callee-parameters:

prcprops:      OPTION(PROCPROPS)
p:             EXP PROC
callee_params: CALLEES
               -> EXP BOTTOM

The procedure p will be called with the same caller parameters as the current procedure and the new callee_params and return to the call site of the current procedure. Semantically, if S is the return SHAPE of the current procedure, and L is its caller-parameters:

tail_call(P, p, C) = return(apply_general_proc(S, P, p, L, C, make_top()))

However an implementation is expected to conserve stack by using the same space for the call of p as the current procedure.
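The stack-conserving effect can be illustrated with a tail-recursive C function; that the compiler actually reuses one frame per level is an assumption about the translator, not something C guarantees:

```c
#include <assert.h>

/* Tail-recursive factorial: the recursive call is the last action of the
   body, so a translator honouring the tail call can replace the current
   frame with the callee's, as with tail_call. */
static long fact_acc(long n, long acc)
{
    if (n <= 1)
        return acc;
    return fact_acc(n - 1, n * acc);   /* candidate tail call */
}
```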


6.2.2. PROCPROPS

The presence of var_callees (or var_callers) means that the procedure can be called with more actual callee (or caller) parameters than are indicated in callee_intro (or caller_intro). These extra parameters would be accessed within body using offset calculations with respect to the named parameters. The offsets should be calculated using parameter_alignment to give the packing of the parameter packs.

The presence of untidy means that body may be terminated by an untidy_return. This returns the result of the procedure as in return, but the lifetime of the local space of the procedure is extended (in practice this is performed by not returning the stack to its original value at the call). A procedure containing an untidy_return is a generalisation of local_alloc (see section 6.3.4). For example the procedure could do some complicated local allocation (a triangular array, say) and untidily return a pointer to it so that the space is still valid in the calling procedure. The space will remain valid for the lifetime of the calling procedure unless some local_free is called within it, just as if the space had been generated by a local_alloc in the calling procedure.

The presence of inline is just a hint to the translator that the procedure body is a good candidate for inlining at the call.

The presence of check_stack means that the static stack requirements of the procedure will be checked on entry to see that they do not exceed the limits imposed by set_stack_limit; if they are exceeded a TDF exception with ERROR_CODE stack_overflow (see section 7.3) will be raised.

6.3. Defining and using locals

6.3.1. identify, variable

Local definitions within the body of a procedure are given by two EXP constructors which permit one to give names to values over a scope given by the definition. Note that this is somewhat different to declarations in standard languages where the declaration is usually embedded in a larger construct which defines the scope of the name; here the scope is explicit in the definition. The reason for this will become more obvious in the discussion of TDF transformations. The simpler constructor is identify:

opt_access: OPTION(ACCESS)
name_intro: TAG x
definition: EXP x
body:       EXP y
            -> EXP y

The definition is evaluated and its result is identified with the TAG given by name_intro within its scope body. Hence the use of any obtain_tag(name_intro) within body is equivalent to using this result. Anywhere else, obtain_tag(name_intro) is meaningless, including in other procedures.

The other kind of local definition is variable:

opt_access: OPTION(ACCESS)
name_intro: TAG x
init:       EXP x
body:       EXP y
            -> EXP y

Here the init EXP is evaluated and its result serves as an initialisation of space of SHAPE x local to the procedure. The TAG name_intro is then identified with a pointer to that space within body. A use of obtain_tag(name_intro) within body is equivalent to using this pointer and is meaningless outside body or in other procedures. Many variable declarations in programs are uninitialised; in this case, the init argument could be provided by make_value which will produce some value with SHAPE given by its parameter.
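As an analogy only, the two kinds of definition behave like the following C fragment: identify binds a name to a value, while variable allocates addressable space whose pointer the TAG denotes:

```c
#include <assert.h>

static int body(void)
{
    const int id = 3 * 4;   /* identify: TAG bound to the definition's result */
    int var = 0;            /* variable: TAG bound to a pointer to this space */
    int *p = &var;          /* obtain_tag on a variable yields the pointer */
    *p = id;                /* assign/contents go through that pointer */
    return var;
}
```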

6.3.2. ACCESS

The ACCESS SORT given in tag declarations is a way of describing a list of properties to be associated with the tag. They are basically divided into two classes: one which describes global properties of the tag with respect to the model for locals, and the other which gives "hints" on how the value will be used. Any of these can be combined using add_access.

Locals model

At the moment, the first class of ACCESS constructors comprises standard_access (the default), visible, out_par and long_jump_access.

The basic model used for the locals and parameters of a procedure is a frame within a stack of nested procedure calls. One could implement a procedure by allocating space according to SHAPEs of all of the parameter and local TAGs so that the corresponding values are at fixed offsets either from the start of the frame or some pointer within it.

Indeed, if the ACCESS opt_access parameter in a TAG definition is produced by visible, then a translator is almost bound to do just that for that TAG. This is because it allows for the possibility of the value being accessed in some way other than by using obtain_tag, which is the standard way of recovering the value bound to the TAG. The principal way that this could happen within TDF is by the combined use of env_offset to give the offset and current_env to give a pointer to the current frame (see section 6.3.3).

The out_par ACCESS is only applicable to caller parameters of procedures; it indicates that the value of the TAG concerned will be accessed by the postlude part of an apply_general_proc. Hence, the value of the parameter must be accessible after the call; usually this will be on the stack in the caller's frame.

The long_jump_access flag is used to indicate that the tag must be available after a long_jump. In practice, if either visible or long_jump_access is set, most translators would allocate the space for the declaration on the main-store stack rather than in an available register. If it is not set, then a translator is free to use its own criteria for whether space which can fit into a register is allocated on the stack or in a register, provided there is no observable difference (other than time or program size) between the two possibilities.

Some of these criteria are rather obvious; for example, if a pointer to a local variable is passed outside the procedure in an opaque manner, then it is highly unlikely that one can allocate the variable in a register. Some might be less obvious. If the only uses of a TAG t were in obtain_tag(t)s which are operands of contents or the left-hand operands of assigns, most ABIs would allow the tag to be placed in a register. We do not necessarily have to generate a pointer value if it can be subsumed by the operations available.

Access "hints"

A variable tag with ACCESS constant is a write-once value; once it is initialised the variable will always contain the initialisation. In other words the tag is a pointer to a constant value; translators can use this information to apply various optimisations.

A POINTER tag with ACCESS no_other_read or no_other_write is asserting that there are no "aliased" accesses to the contents of the pointer. For example, when applied to a parameter of a procedure, it is saying that the original pointer of the tag is distinct from any other tags used (reading/writing) in the lifetime of the tag. These other tags could either be further parameters of the procedure or globals. Clearly, this is useful for describing the limitations imposed by Fortran parameters, for example.

6.3.3. current_env, env_offset

The constructor current_env gives a pointer to the current procedure frame of SHAPE POINTER(fa), where fa depends on how the procedure was defined and will be some set of the special frame ALIGNMENTs. This set will always include locals_alignment - the alignment of any locals defined within the procedure. If the procedure has any caller-parameters, the set will also include callers_alignment(b) where b indicates whether there can be a variable number of them; similarly for callee-parameters.

Offsets from the current_env of a procedure to a tag declared in the procedure are constructed by env_offset:

t:  TAG x
    -> EXP OFFSET(fa, y)

The frame ALIGNMENT fa will be the appropriate one for the TAG t; i.e. if t is a local then the fa will be locals_alignment; if t is a caller parameter, fa will be callers_alignment(b); if t is a callee_parameter, fa will be callees_alignment(b). The alignment y will be the alignment of the initialisation of t.

The offset arithmetic operations allow one to access the values of tags non-locally using values derived from current_env and env_offset. They are effectively defined by the following identities:

if TAG t is derived from a variable definition:
        add_to_ptr(current_env(), env_offset(locals_alignment, A, t)) = obtain_tag(t)
if TAG t is derived from an identify definition:
        contents(S, add_to_ptr(current_env(), env_offset(locals_alignment, A, t))) = obtain_tag(t)
if TAG t is derived from a caller parameter:
        add_to_ptr(current_env(), env_offset(callers_alignment(b), A, t)) = obtain_tag(t)
if TAG t is derived from a callee parameter:
        add_to_ptr(current_env(), env_offset(callees_alignment(b), A, t)) = obtain_tag(t)

These identities are valid throughout the extent of t, including in inner procedure calls. In other words, one can dynamically create a pointer to the value by composing current_env and env_offset.

The importance of this is that env_offset(t) is a constant OFFSET and can be used anywhere within the enclosing UNIT, in other procedures or as part of a constant TAGDEF; remember that the TDFINT underlying t is unique within the UNIT. The result of a current_env could be passed to another procedure (as a parameter, say) and this new procedure could then access a local of the original by using its env_offset. This would be the method one would use to access non-local, non-global identifiers in a language which allowed one to define procedures within procedures, such as Pascal or Algol. Of course, given the stack-based model, the value given by current_env becomes meaningless once the procedure in which it is invoked is exited.
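The Pascal/Algol use can be sketched in C by passing the current_env pointer explicitly, as a static link; the frame struct and names here are illustrative assumptions:

```c
#include <assert.h>

/* The outer procedure's frame, as current_env would expose it. */
struct frame { int local; };

/* The inner procedure reaches the outer local through the passed frame
   pointer, i.e. add_to_ptr(current_env(), env_offset(..., t)). */
static void inner(struct frame *env)
{
    env->local += 1;
}

static int outer(void)
{
    struct frame f = { 41 };
    inner(&f);               /* current_env() passed as a parameter */
    return f.local;
}
```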

6.3.4. local_alloc, local_free_all, last_local

The size of stack frame produced by variable and identify definitions is a translate-time constant since the frame is composed of values whose SHAPEs are known. TDF also allows one to produce dynamically sized local objects which are conceptually part of the frame. These are produced by local_alloc:

arg1: EXP OFFSET(x, y)
      -> EXP POINTER(alloca_alignment)

The operand arg1 gives the size of the new object required and the result is a pointer to the space for this object "on top of the stack" as part of the frame. The quotation marks indicate that a translator writer might prefer to maintain a dynamic stack as well as a static one. There are some disadvantages in putting everything into one stack which may well outweigh the trouble of maintaining another stack which is relatively infrequently used. If a frame has a known size, then all addressing of locals can be done using a stack-front register; if it is dynamically sized, then another frame-pointer register must be used - some ABIs make this easy but not all. The majority of procedures contain no local_allocs, so their addressing of locals can always be done relative to a stack-front; only the others have to use another register for a frame pointer.

The alignment of the pointer result is alloca_alignment, which must include all SHAPE alignments.
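C's nearest analogue to local_alloc is a dynamically sized local object, for example a C99 variable-length array. The analogy is approximate: a VLA is freed at block exit rather than by an explicit local_free:

```c
#include <assert.h>

/* Sum of squares using a dynamically sized local, local_alloc style. */
static int sum_squares(int n)
{
    int buf[n];              /* analogue of local_alloc(n * sizeof(int)) */
    int i, s = 0;
    for (i = 0; i < n; i++)
        buf[i] = i * i;
    for (i = 0; i < n; i++)
        s += buf[i];
    return s;                /* buf is released on return, frame-style */
}
```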

There are two constructors for releasing space generated by local_alloc. To release all such space generated in the current procedure one does local_free_all(); this reduces the size of the current frame to its static size.

The other constructor is local_free which is effectively a "pop" to local_alloc's "push":

a: EXP OFFSET(x, y)
p: EXP POINTER(alloca_alignment)
   -> EXP TOP

Here p must evaluate to a pointer generated either by local_alloc or last_local. The effect is to free all of the space locally allocated after p. The usual implementation (with a downward growing stack) is that p becomes the "top of stack" pointer.

The use of a procedure with an untidy_return is just a generalisation of the idea of local_alloc and the space made available by its use can be freed in the same way as normal local allocations. Of course, given that it could be the result of the procedure it can be structured in an arbitrarily complicated way.

6.4. Heap storage

At the moment, there are no explicit constructors for creating dynamic off-stack storage in TDF. Any off-stack storage requirements must be met by the API in which the system is embedded, using the standard procedural interface. For example, the ANSI C API allows the creation of heap space using standard library procedures like malloc.

  1. [e]

    The vararg construction in C is implemented by giving more actuals than formals; the extra parameters are accessed by offset arithmetic with a pointer to a formal, using parameter_alignment to pad the offsets.

  2. [f]

    If a formal parameter is to be used in this way, it should be marked as having out_par ACCESS in its corresponding TAGSHACC in caller_intro.

7. Control Flow within procedures

  1. 7.1. Unconditional flow
    1. 7.1.1. sequence
  2. 7.2. Conditional flow
    1. 7.2.1. labelled, make_label
    2. 7.2.2. goto, make_local_lv, goto_local_lv, long_jump, return_to_label
    3. 7.2.3. integer_test, NTEST
    4. 7.2.4. case
    5. 7.2.5. conditional, repeat
  3. 7.3. Exceptional flow

7.1. Unconditional flow

7.1.1. sequence

To perform a sequential set of operations in TDF, one uses the constructor sequence:

statements: LIST(EXP)
result:     EXP x
            -> EXP x

Each of the statements are evaluated in order, throwing away their results. Then, result is evaluated and its result is the result of the sequence.

A translator is free to rearrange the order of evaluation if there is no observable difference other than in time or space. This applies anywhere I say “something is evaluated and then ...”. We find this kind of statement in definitions of local variables in section 6.3, and in the controlling parts of the conditional constructions below.

For a more precise discussion of allowable reorderings see () .

7.2. Conditional flow

7.2.1. labelled, make_label

All simple changes of flow of control within a TDF procedure are done by jumps or branches to LABELs, mirroring what actually happens in most computers. There are three constructors which introduce LABELs; the most general is labelled which allows arbitrary jumping between its component EXPs:

placelabs_intro: LIST(LABEL)
starter:         EXP x
places:          LIST(EXP)
                 -> EXP w

Each of the EXPs in places is labelled by the corresponding LABEL in placelabs_intro; these LABELs are constructed by make_label applied to a TDFINT uniquely drawn from the LABEL name-space introduced by the enclosing PROPS. The evaluation starts by evaluating starter; if this runs to completion, the result of the labelled is the result of starter. If there is some jump to a LABEL in placelabs_intro then control passes to the corresponding EXP in places, and so on. If any of these EXPs runs to completion then its result is the result of the labelled; hence the SHAPE of the result, w, is the LUB of the SHAPEs of the component EXPs.

Note that control does not automatically pass from one EXP to the next; if this is required the appropriate EXP must end with an explicit goto.
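These semantics can be sketched as a tiny interpreter. This is a Python model under my own invented names (jumps modelled as exceptions), not anything defined by TDF:

```python
class Goto(Exception):
    """Models a TDF goto to a LABEL introduced by labelled."""
    def __init__(self, label):
        self.label = label

def labelled(starter, places):
    # Run starter; a jump to one of the introduced labels passes
    # control to the corresponding EXP. Normal completion of any
    # component ends the whole construct; control never falls
    # through from one place to the next.
    current = starter
    while True:
        try:
            return current()
        except Goto as jump:
            current = places[jump.label]

def starter():
    x = -5
    if x < 0:
        raise Goto("negate")   # an explicit jump is required
    return x

result = labelled(starter, {"negate": lambda: 5})
```

In this model `result` is 5: starter jumps to the "negate" place, whose normal completion ends the labelled.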

7.2.2. goto, make_local_lv, goto_local_lv, long_jump, return_to_label

The unconditional goto is the simplest method of jumping. In common with all the methods of jumping using LABELs, its LABEL parameter must have been introduced in an enclosing construction, like labelled, which scopes it.

One can also pick up a label value of SHAPE POINTER {code} (usually implemented as a program address) using make_local_lv for later use by an "indirect jump" such as goto_local_lv . Here the same prohibition holds - the construction which introduced the LABEL must still be active.

The construction goto_local_lv only permits one to jump within the current procedure; if one wished to do a jump out of a procedure into a calling one, one uses long_jump which requires a pointer to the destination frame (produced by current_env in the destination procedure) as well as the label value. If a long_jump is made to a label, only those local TAGs which have been defined with a visible ACCESS are guaranteed to have preserved their values; the translator could allocate the other TAGs in scope as registers whose values are not necessarily preserved.

A slightly "shorter" long jump is given by return_to_label. Here the destination of the jump must be a label value in the calling procedure. Usually this value would be passed as a parameter of the call, to provide an alternative exit from the procedure.

7.2.3. integer_test, NTEST

Conditional branching is provided by the various _test constructors, one for each primitive SHAPE except BITFIELD. I shall use integer_test as the paradigm for them all:

nt:   NTEST
dest: LABEL
arg1: EXP INTEGER(v)
arg2: EXP INTEGER(v)
      -> EXP TOP

The NTEST nt chooses a dyadic test (e.g. =, ≥, <, etc.) that is to be applied to the results of evaluating arg1 and arg2. If arg1 nt arg2 then the result is TOP; otherwise control passes to the LABEL dest. In other words, integer_test acts like an assertion where if the assertion is false, control passes to the LABEL instead of continuing in the normal fashion.

Some of the constructors for NTESTs are disallowed for some _tests (e.g. proc_test) while others are redundant for some _tests; for example, not_greater_than is the same as less_than_or_equal for all except possibly floating_test, where the use of NaNs (in the IEEE sense) as operands may give different results.

7.2.4. case

There are only two other ways of changing flow of control using LABELs. One arises in ERROR_TREATMENTs which will be dealt with in the arithmetic operations. The other is case:

exhaustive: BOOL
control:    EXP INTEGER(v)
branches:   LIST(CASELIM)
            -> EXP (exhaustive ? BOTTOM : TOP)

Each CASELIM is constructed using make_caselim:

branch: LABEL
lower:  SIGNED_NAT
upper:  SIGNED_NAT
        -> CASELIM

In the case construction, the control EXP is evaluated and tested to see whether its value lies inclusively between some lower and upper in the list of CASELIMs. If so, control passes to the corresponding branch. The order in which these tests are performed is undefined, so it is probably best if the tests are exclusive. The exhaustive flag being true asserts that one of the branches will be taken and so the SHAPE of the result is BOTTOM. Otherwise, if none of the branches are taken, its SHAPE is TOP and execution carries on normally.
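The range dispatch can be modelled directly. The following Python sketch (names invented; a taken branch is modelled by returning its label) is illustrative only:

```python
def case(exhaustive, control, branches):
    """Model of TDF case: branches is a list of (branch_label,
    lower, upper) triples, as built by make_caselim. A control
    value lying inclusively within some range 'jumps' to that
    branch; otherwise the result is TOP (here None) and execution
    carries on normally."""
    for branch_label, lower, upper in branches:
        if lower <= control <= upper:
            return branch_label
    assert not exhaustive, "an exhaustive case must take some branch"
    return None

taken = case(False, 7, [("low", 0, 4), ("mid", 5, 9), ("high", 10, 20)])
fell_through = case(False, 99, [("low", 0, 4)])
```

With overlapping ranges the model would take the first match, but, as the text says, the order of the tests in real TDF is undefined, which is why exclusive ranges are advisable.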

7.2.5. conditional, repeat

Besides labelled, two other constructors, conditional and repeat, introduce LABELs which can be used with the various jump instructions. Both can be expressed as labelled, but have extra constraints which make assertions about the use of the LABELs introduced and could usually be translated more efficiently; hence producers are advised to use them where possible. A conditional expression or statement would usually be done using conditional:

alt_label_intro: LABEL
first:           EXP x
alt:             EXP y
                 -> EXP(LUB(x, y))

Here first is evaluated; if it terminates normally, its result is the result of the conditional. If a jump to alt_label_intro occurs then alt is evaluated and its result is the result of the conditional. Clearly, this, so far, is just the same as labelled((alt_label_intro), first, (alt)). However, conditional imposes the constraint that alt cannot use alt_label_intro. All jumps to alt_label_intro are "forward jumps" - a useful property to know in a translator.

Obviously, this kind of conditional is rather different to those found in traditional high-level languages which usually have three components, a boolean expression, a then-part and an else-part. Here, the first component includes both the boolean expression and the then-part; usually we find that it is a sequence of the tests (branching to alt_label_intro) forming the boolean expression followed by the else-part. This formulation means that HLL constructions like "andif" and "orelse" do not require special constructions in TDF.
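The "tests plus then-part" structure of first can be modelled as below (a Python sketch with invented names; the forward jump to alt_label_intro is modelled as an exception that only first can raise):

```python
class ToAlt(Exception):
    """Models a jump to alt_label_intro; by construction it can only
    arise inside first, so every such jump is a forward jump."""

def conditional(first, alt):
    try:
        return first()          # normal completion: its result wins
    except ToAlt:
        return alt()            # a jump to the alt label

# C's (a && b) ? X : Y: first holds both the tests and the then-part,
# and each failing test simply jumps to the alt label - so "andif"
# needs no special TDF construction.
def then_part(a, b):
    if not a:
        raise ToAlt
    if not b:
        raise ToAlt
    return "X"

both = conditional(lambda: then_part(True, True), lambda: "Y")
either = conditional(lambda: then_part(True, False), lambda: "Y")
```
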

A simple loop can be expressed using repeat:

repeat_label_intro: LABEL
start:              EXP TOP
body:               EXP y
                    -> EXP y

The EXP start is evaluated, followed by body which is labelled by repeat_label_intro. If a jump to repeat_label_intro occurs in body, then body is re-evaluated. If body terminates normally then its result is the result of the repeat. This is just the same as:

labelled((repeat_label_intro), sequence((start), goto(repeat_label_intro)), (body))

except that no jumps to repeat_label_intro are allowed in start - a useful place to do initialisations for the loop.

Just as with conditionals, the tests for the continuing or breaking the loop are included in body and require no special constructions.
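A model of repeat, in the same illustrative Python style as before (the jump back to repeat_label_intro is an exception raised inside body):

```python
class ToRepeatLabel(Exception):
    """Models a jump back to repeat_label_intro."""

def repeat(start, body):
    start()                      # initialisation; may not jump back
    while True:
        try:
            return body()        # normal completion ends the repeat
        except ToRepeatLabel:
            pass                 # jump occurred: re-evaluate body

state = {"i": 0, "total": 0}

def body():
    state["i"] += 1
    state["total"] += state["i"]
    if state["i"] < 5:           # the continue/break test lives in body
        raise ToRepeatLabel
    return state["total"]

total = repeat(lambda: None, body)   # sums 1..5
```
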

7.3. Exceptional flow

A further way of changing the flow of control in a TDF program is by means of exceptions. TDF exceptions currently arise from three sources - overflow in arithmetic operations with a trap ERROR_TREATMENT (see section 8.1.1), an attempt to access a value via a nil pointer using assign_with_mode, contents_with_mode or move_some (see section 7.3), or a stack overflow on procedure entry with PROCPROPS check_stack (see section 5.2.2) or a local_alloc_check.

Each of these exceptions has an ERROR_CODE ascribed to it, namely overflow, nil_access and stack_overflow. Each ERROR_CODE can be made into a distinct NAT by means of the constructor error_val; these NATs could be used, for example, to discriminate the different kinds of errors using a case construction.

When one of these exceptions is raised, the translator will arrange that a TOKEN, ~Throw, is called with the appropriate ERROR_CODE as its (sole) parameter. Given that every language has a different way of both characterising and handling exceptions, the exact expansion of ~Throw must be given by the producer for that language - usually it will involve doing a long_jump to some label specifying a signal handler and translating the ERROR_CODE into its language-specific representation.

The expansion of ~Throw forms part of the language specific environment required for the translation of TDF derived from the language, just as the exact shape of FILE must be given for the translation of C.

8. Values, variables and assignments.

  1. 8.1. contents
  2. 8.2. assign
  3. 8.3. TRANSFER_MODE operations
  4. 8.4. Assigning and extracting bitfields

TAGs in TDF fulfil the role played by identifiers in most programming languages. One can apply obtain_tag to find the value bound to the TAG. This value is always a constant over the scope of a particular definition of the TAG. This may sound rather strange to those used to the concepts of left-hand and right-hand values in C, for example, but is quite easily explained as follows.

If a TAG, id, is introduced by an identify, then the value bound is fixed by its definition argument. If, on the other hand, v was a TAG introduced by a variable definition, then the value bound to v is a pointer to fixed space in the procedure frame (i.e. the left-hand value in C).

8.1. contents

In order to get the contents of this space (the right-hand value in C), one must apply the contents operator to the pointer:

contents(shape(v), obtain_tag(v))

In general, the contents constructor takes a SHAPE and an expression delivering a pointer:

s:    SHAPE
arg1: EXP POINTER(x)
      -> EXP s

It delivers the value of SHAPE s, pointed at by the evaluation of arg1. The alignment of s need not be identical to x. It only needs to be included in it; this would allow one, for example, to pick out the first field of a structure from a pointer to it.

8.2. assign

A simple assignment in TDF is done using assign:

arg1: EXP POINTER(x)
arg2: EXP y
      -> EXP TOP

The EXPs arg1 and arg2 are evaluated (no ordering implied) and the value of SHAPE y given by arg2 is put into the space pointed at by arg1. Once again, the alignment of y need only be included in x, allowing the assignment to the first field of a structure using a pointer to the structure. An assignment has no obvious result so its SHAPE is TOP.

Some languages give results to assignments. For example, C defines the result of an assignment to be its right-hand expression, so that if the result of (v = exp) was required, it would probably be expressed as:

identify(empty, newtag, exp, sequence((assign(obtain_tag(v), obtain_tag(newtag))), obtain_tag(newtag)))

From the definition of assign, the destination argument, arg1, must have a POINTER shape. This means that given the TAG id above, assign(obtain_tag(id), lhs) is only legal if the definition of its identify had a POINTER SHAPE. A trivial example would be if id was defined:

identify(empty, id, obtain_tag(v), assign(obtain_tag(id), lhs))

This identifies id with the variable v which has a POINTER SHAPE, and assigns lhs to this pointer. Given that id does not occur in lhs, this is identical to:

assign(obtain_tag(v), lhs).

Equivalences like this are widely used for transforming TDF in translators and other tools (see section 11).
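The pointer/contents discipline, and the equivalence just given, can be modelled with a simple store. This is an illustrative Python sketch (the store, the helper names and the string "pointers" are all my own invention):

```python
# The store maps pointers to contents. A TAG introduced by a variable
# definition is bound to a pointer; identify binds a TAG to a fixed
# value for its scope - here, that same pointer.
store = {}

def variable(name, init):
    store[name] = init
    return name                  # the tag's bound value is the pointer

def contents(ptr):
    return store[ptr]            # the right-hand value

def assign(ptr, value):
    store[ptr] = value           # SHAPE TOP: no interesting result

v = variable("v", 0)

# identify(empty, id, obtain_tag(v), assign(obtain_tag(id), lhs)):
id_tag = v                       # id is identified with v's pointer
assign(id_tag, 99)

# ...leaves the store exactly as assign(obtain_tag(v), lhs) would:
final = contents(v)
```
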

8.3. TRANSFER_MODE operations

The TRANSFER_MODE operations allow one to do assignment and contents operations with various qualifiers to control how the access is done in a more detailed manner than the standard contents and assign operations.

For example, the value assigned in assign has some fixed SHAPE; its size is known at translate-time. Variable sized objects can be moved by move_some:

md:   TRANSFER_MODE
arg1: EXP POINTER(x)
arg2: EXP POINTER(y)
arg3: EXP OFFSET(z, t)
      -> EXP TOP

The EXP arg1 is the destination pointer, and arg2 is a source pointer. The amount moved is given by the OFFSET arg3.

The TRANSFER_MODE md parameter controls the way that the move will be performed. If overlap is present, then the translator will ensure that the move is equivalent to moving the source into new space and then copying it to the destination; it would probably do this by choosing a good direction in which to step through the value. The alternative, standard_transfer_mode, indicates that it does not matter.

If the TRANSFER_MODE trap_on_nil is present and arg1 is a nil pointer, a TDF exception with ERROR_CODE nil_access is raised.

There are variants of both the contents and assign constructors. The signature of contents_with_mode is:

md:   TRANSFER_MODE
s:    SHAPE
arg1: EXP POINTER(x)
      -> EXP s

Here, the only significant TRANSFER_MODE constructors md are trap_on_nil and volatile. The latter is principally intended to implement the C volatile construction; it certainly means that the contents_with_mode operation will never be “optimised” away.

Similar considerations apply to assign_with_mode; here the overlap TRANSFER_MODE is also possible with the same meaning as in move_some.

8.4. Assigning and extracting bitfields

Since pointers to bits are forbidden, two special operations are provided to extract and assign bitfields. These require the use of a pointer value and a bitfield offset from the pointer. The signature of bitfield_contents which extracts a bitfield in this manner is:

arg1: EXP POINTER(x)
arg2: EXP OFFSET(y,z)
      -> EXP bitfield(v)

Here arg1 is a pointer to an alignment x which includes v, the required bitfield alignment. In practice, x must include an INTEGER VARIETY whose representation can contain the entire bitfield. Thus on a standard architecture, if v is a 15 bit bitfield, x must include at least a 16 bit integer variety; a 27 bit bitfield would require a 32 bit integer variety, and so on. Indeed the constraint is stronger than this, since there must be an integer variety, accessible from arg1, which entirely contains the bitfield.

This constraint means that producers cannot expect that arbitrary bitfield lengths can be accommodated without extra padding; clearly it also means that the maximum bitfield length possible is the maximum size of integer variety that can be implemented on the translator concerned (this is defined to be at least 32). On standard architectures, the producer can expect that an array of bitfields of length 2^n will be packed without padding; this, of course, includes arrays of booleans. For structures of several different bitfields, he can be sure of no extra padding bits if the total number of bits involved is less than or equal to 32; similarly if he can subdivide the bitfields so that each of the subdivisions (except the last) is exactly equal to 32 and the last is less than or equal to 32. This could be relaxed to 64 bits if the translator deals with 64 bit integer varieties, but would require that conditional TDF is produced to maintain portability to 32 bit platforms - and on these platforms the assurance of close packing would be lost.
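The packing rules above can be checked mechanically. The following Python sketch assumes illustrative integer-variety sizes of 8, 16 and 32 bits and a 32 bit word; real sizes are translator-specific:

```python
def min_integer_bits(bitfield_len, sizes=(8, 16, 32)):
    """Smallest integer variety wholly containing the bitfield;
    the available sizes here are assumptions, not mandated by TDF."""
    for bits in sizes:
        if bitfield_len <= bits:
            return bits
    raise ValueError("bitfield exceeds the largest integer variety")

def packs_without_padding(field_lengths, word=32):
    # No extra padding is needed if the fields subdivide into groups
    # whose totals are exactly one word each, except for a final
    # group that may fall short; a field may never straddle a word.
    acc = 0
    for length in field_lengths:
        acc += length
        if acc == word:
            acc = 0
        elif acc > word:
            return False          # this field would straddle a boundary
    return True
```

For example, four 8 bit bitfields or a 15 bit plus a 17 bit bitfield pack cleanly, while three 15 bit bitfields do not.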

Since a producer is ignorant of the exact representational varieties used by a translator, the onus is on the translator writer to provide standard tokens which can be used by a producer to achieve both optimum packing of bitfields and minimum alignments for COMPOUNDs containing them (see Bitfield offsets). These tokens would allow one to construct an offset of the form OFFSET(x, b) (where b is some bitfield alignment and x is the `minimum' alignment which could contain it) in a manner analogous to the normal padding operations for offsets. This offset could then be used both in the construction of a compound shape and in the extraction and assignment constructors.

The assignment of bitfields follows the same pattern with the same constraints using bitfield_assign:

arg1: EXP POINTER(x)
arg2: EXP OFFSET(y,z)
arg3: EXP bitfield(v)
      -> EXP TOP

9. Operations

  1. 9.1. VARIETY and overflow
  3. 9.3. Division and remainder
  4. 9.4. change_variety
  5. 9.5. and, or, not, xor
  6. 9.6. Floating-point operations, ROUNDING_MODE
  7. 9.7. change_bitfield_to_int, change_int_to_bitfield
  8. 9.8. make_compound, make_nof, n_copies

Most of the arithmetic operations of TDF have familiar analogues in standard languages and processors. They differ principally in how error conditions (e.g. numeric overflow) are handled. There is a wide diversity in error handling in both languages and processors, so TDF tries to reduce it to the simplest primitive level compatible with their desired operation in languages and their implementation on processors. Before delving into the details of error handling, it is worthwhile revisiting the SHAPEs and ranges in arithmetic VARIETYs.

9.1. VARIETY and overflow

An INTEGER VARIETY, for example, is defined by some range of signed natural numbers. A translator will fit this range into some possibly larger range which is convenient for the processor in question. For example, the integers with variety(1,10) would probably be represented as unsigned characters with range (0..255), a convenient representation for both storage and arithmetic.

The question then arises of what is meant by overflow in an operation which is meant to deliver an integer of this VARIETY - is it when the integer result is outside the range (1..10) or outside the range (0..255)? For purely pragmatic reasons, TDF chooses the latter - the result is overflowed when it is outside its representational range (0..255). If the program insists that the result must be within (1..10), then it can always test for it. If the program uses the error handling mechanism and the result is outside (1..10) but still within the representational limits then, in order for the program to be portable, the error handling actions must in some sense be "continuous" with the normal action. This would not be the case if, for example, the value was used to index an array with bounds (1..10), but will usually be the case where the value is used in further arithmetic operations which have similar error handling. The arithmetic will continue to give the mathematically correct result provided the representational bounds are not exceeded.

The limits in a VARIETY are there to provide a guide to its representation, not to give hard limits on its possible values. This choice is consistent with the general TDF philosophy of how exceptions are to be treated. If, for example, one wishes to do array-bound checking, then it must be done by explicit tests on the indices, jumping to some exception action if they fail. Similarly, explicit tests can be made on an integer value, provided its representational limits are not exceeded. It is unlikely that a translator could produce any more efficient code, in general, if the tests were implicit. The representational limits can be exceeded in arithmetic operations, so facilities are provided either to ignore the overflow, to jump to a label, or to obey a TDF exception handler if it happens.


Taking integer addition as an example, plus has signature:

ov_err: ERROR_TREATMENT
arg1:   EXP INTEGER(v)
arg2:   EXP INTEGER(v)
        -> EXP INTEGER(v)

The result of the addition has the same integer VARIETY as its parameters. If the representational bounds of v are exceeded, then the action taken depends on the ERROR_TREATMENT ov_err.

The ERROR_TREATMENT, impossible, is an assertion by the producer that overflow will not occur; on its head be it if it does.

The ERROR_TREATMENTs continue and wrap give "fixup" values for the result. For continue, the fixup value is undefined. For wrap, the answer will be taken modulo 2 to the power of the number of bits in the representational variety. Thus, integer arithmetic with a byte representational variety is done modulo 256. This just corresponds to what happens in most processors and, incidentally, to the definition of C.
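As an illustration of wrap, here is a Python model assuming an unsigned byte representational variety (the width is my assumption for the example, not fixed by TDF):

```python
def plus_with_wrap(arg1, arg2, repr_bits=8):
    """plus with the wrap ERROR_TREATMENT: the result is taken
    modulo 2**repr_bits, the size of the (assumed) representational
    variety - here an unsigned byte, so arithmetic is modulo 256."""
    return (arg1 + arg2) % (2 ** repr_bits)

in_range = plus_with_wrap(3, 4)        # no overflow: ordinary sum
wrapped = plus_with_wrap(250, 10)      # 260 wraps modulo 256
```
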

The ERROR_TREATMENT that one would use if one wished to jump to a label is error_jump:

lab: LABEL
     -> ERROR_TREATMENT

A branch to lab will occur if the result overflows.

The ERROR_TREATMENT trap(overflow) will raise a TDF exception (see section 6.3) with ERROR_CODE overflow if overflow occurs.

9.3. Division and remainder

The various constructors involving integer division (e.g. div1, rem1) have two ERROR_TREATMENT parameters, one for overflow and one for divide-by-zero; e.g. div1 is:

div_by_zero_error: ERROR_TREATMENT
ov_err:            ERROR_TREATMENT
arg1:              EXP INTEGER(v)
arg2:              EXP INTEGER(v)
                   -> EXP INTEGER(v)

There are two different kinds of division operators (with corresponding remainder operators) defined. The operators div2 and rem2 are those generally implemented directly by processor instructions, giving the sign of the remainder the same as the sign of the dividend. The other pair, div1 and rem1, is less commonly implemented in hardware, but has rather more consistent mathematical properties; here the sign of the remainder is the same as the sign of the divisor. Thus, div1(x, 2) is the same as shift_right(x, 1), which is only true for div2 if x is positive. The two pairs of operations give the same results if both operands have the same sign. The constructors div0 and rem0 allow the translator to choose whichever of the two forms of division is convenient - the producer is saying that he does not care which is used, as long as they are pairwise consistent. The precise definition of the divide operations is given in ().
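The difference between the two pairs is easy to demonstrate. The following Python sketch models div1/rem1 as flooring division and div2/rem2 as truncating (toward-zero) division, which is how I read the definitions above; it is illustrative, not the normative definition:

```python
def div1(x, y):
    # round toward minus infinity: remainder takes the divisor's sign
    return x // y                 # Python's // already floors

def rem1(x, y):
    return x - y * div1(x, y)

def div2(x, y):
    # round toward zero, as processor divide instructions usually do
    q = abs(x) // abs(y)
    return q if (x < 0) == (y < 0) else -q

def rem2(x, y):
    return x - y * div2(x, y)
```

For example, div1(-7, 2) is -4 with rem1 1 (sign of the divisor), while div2(-7, 2) is -3 with rem2 -1 (sign of the dividend); with both operands positive the two pairs agree, and div1(x, 2) always equals shift_right(x, 1).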

9.4. change_variety

Conversions between the various INTEGER varieties are provided for by change_variety:

ov_err: ERROR_TREATMENT
r:      VARIETY
arg1:   EXP INTEGER(v)
        -> EXP INTEGER(r)

If the value arg1 is outside the limits of the representational variety of r, then the ERROR_TREATMENT ov_err will be invoked.
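A model of this range check, with the error treatment as a callback (a Python sketch; the byte limits and the wrap-style fixup are assumptions for the example):

```python
def change_variety(ov_err, lo, hi, arg1):
    """Convert arg1 to a variety whose representational limits are
    (lo..hi); an out-of-range value invokes the error treatment,
    modelled here as a callback computing a fixup value."""
    if lo <= arg1 <= hi:
        return arg1
    return ov_err(arg1)

ok = change_variety(lambda v: v % 256, 0, 255, 42)       # in range
wrapped = change_variety(lambda v: v % 256, 0, 255, 300) # error treatment runs
```
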

9.5. and, or, not, xor

The standard logical operations, and, not, or and xor are provided for all integer varieties. Since integer varieties are defined to be represented in twos-complement, the results of these operations are well defined.

9.6. Floating-point operations, ROUNDING_MODE

All of the floating-point (including complex) operations include ERROR_TREATMENTs. If the result of a floating-point operation cannot be represented in the desired FLOATING_VARIETY, the error treatment is invoked. If the ERROR_TREATMENT is wrap or impossible, the result is undefined; otherwise the jump operates in the same way as for integer operations. Both floating_plus and floating_mult are defined as n-ary operations. In general, floating addition and multiplication are not associative, but a producer may not care about the order in which they are performed. Making them appear as though they were associative allows the translator to choose an order which is convenient to the hardware.

Conversions from integer to floating are done by float_int, and from floating to integer by round_with_mode. This latter constructor has a parameter of SORT ROUNDING_MODE which effectively gives the IEEE rounding mode to be applied to the float to produce its integer result.

One can extract the real and imaginary parts of a complex FLOATING using real_part and imaginary_part. A complex FLOATING can be constructed using make_complex. Normal complex arithmetic applies to all the other FLOATING constructors except for those explicitly excluded (e.g. floating_abs, floating_max, etc.).

9.7. change_bitfield_to_int, change_int_to_bitfield

There are two bit-field operations, change_bitfield_to_int and change_int_to_bitfield, to transform between bit-fields and integers. If the varieties do not fit, the result is undefined; the producer can always get it right.

9.8. make_compound, make_nof, n_copies

There is one operation to make values of COMPOUND SHAPE, make_compound:

arg1: EXP OFFSET(base, y)
arg2: LIST(EXP)
      -> EXP COMPOUND(sz)

The OFFSET arg1 is evaluated as a translate-time constant to give sz, the size of the compound object. The EXPs of arg2 are alternately OFFSETs (also translate-time constants) and values which will be placed at those offsets. This constructor is used to construct values given by structure displays; in C, these only occur as constants in global definitions. It is also used to provide union injectors; here sz would be the size of the union and the list would probably have two elements, with the first being an offset_zero.

Constant sized array values may be constructed using make_nof, make_nof_int, and n_copies. Again, they only occur in C as constants in global definitions.

10. Constants

  1. 10.1. _cond constructors
  2. 10.2. Primitive constant constructors

The representation of constants clearly has peculiar difficulties in any architecture neutral format. Leaving aside any problems of how numbers are to be represented, we also have the situation where a "constant" can have different values on different platforms. An obvious example would be the size of a structure which, although it is a constant of any particular run of a program, may have different values on different machines. Further, this constant is in general the result of some computation involving the sizes of its components which are not known until the platform is chosen. In TDF, sizes are always derived from some EXP OFFSET constructed using the various OFFSET arithmetic operations on primitives like shape_offset and offset_zero. Most such EXP OFFSETs produced are in fact constants of the platform; they include field displacements of structure as well as their sizes. TDF assumes that, if these EXPs can be evaluated at translate-time (i.e. when the sizes and alignments of primitive objects are known), then they must be evaluated there. An example of why this is so arises in make_compound; the SHAPE of its result EXP depends on its arg1 EXP OFFSET parameter and all SHAPEs must be translate-time values.

An initialisation of a TAGDEF is a constant in this sense; [g] this allows one to ignore any difficulties about their order of evaluation in the UNIT and consequently the order of evaluation of UNITs. Once again, all the EXPs which are initialisations must be evaluated before the program is run; this obviously includes any make_proc or make_general_proc. The limitation on an initialisation EXP to ensure this is basically that one cannot take the contents of a variable declared outside the EXP after all tokens and conditional evaluation is taken into account. In other words, each TDF translator effectively has a TDF interpreter which can do evaluation of expressions (including conditionals etc.) involving only constants such as numbers, sizes and addresses of globals. This corresponds very roughly to the kind of initialisations of globals that are permissible in C; for a more precise definition, see ().

10.1. _cond constructors

Another place where translate-time evaluation of constants is mandated is in the various _cond constructors which give a kind of "conditional compilation" facility; every SORT which has a SORTNAME, other than TAG, TOKEN and LABEL, has one of these constructors, e.g. exp_cond:

control: EXP INTEGER(v)
e1:      BITSTREAM EXP x
e2:      BITSTREAM EXP y
         -> EXP x or EXP y

The constant, control, is evaluated at translate time. If it is not zero, the entire construction is replaced by the EXP in e1; otherwise it is replaced by the one in e2. In either case, the other BITSTREAM is totally ignored; it does not even need to be sensible TDF. This kind of construction is used extensively in C pre-processing directives, e.g.:

#if (sizeof(int) == sizeof(long)) ...

10.2. Primitive constant constructors

Integer constants are constructed using make_int:

v:     VARIETY
value: SIGNED_NAT
       -> EXP INTEGER(v)

The SIGNED_NAT value is an encoding of the binary value required for the integer; this value must lie within the limits given by v. I have been rather slip-shod in writing down examples of integer constants earlier in this document; where I have written 1 as an integer EXP, for example, I should have written make_int(v, 1) where v is some appropriate VARIETY.

Constants for both floats and strings use STRINGs. A constant string is just a particular example of make_nof_int:

str: STRING(k, n)
     -> EXP NOF(n, INTEGER(v))

Each unsigned integer in str must lie in the variety v and the result is the constant array whose elements are the integers considered to be of VARIETY v. An ASCII-C constant string might have v = variety(-128,127) and k = 7; however, make_nof_int can be used to make strings of any INTEGER VARIETY; the elements of a Unicode string, for example, would be integers of size 16 bits.
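A C-style string constant under this scheme can be modelled as below (a Python sketch; the helper name and the variety(-128,127) limits are just the example from the text):

```python
def make_nof_int(text, lo=-128, hi=127):
    """Model of a C string constant as make_nof_int: the result is
    an array whose elements are the character codes, each of which
    must lie within the variety - here variety(-128,127), k = 7."""
    values = [ord(c) for c in text]
    assert all(lo <= v <= hi for v in values), "value outside the variety"
    return values

hello = make_nof_int("hi")
```

A 16 bit variety (say variety(0, 65535)) would accommodate Unicode code units in the same way.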

A floating constant uses a STRING which contains the ASCII characters of an expansion of the number to some base in make_floating:

f:        FLOATING_VARIETY
rm:       ROUNDING_MODE
sign:     BOOL
mantissa: STRING(k, n)
base:     NAT
exponent: SIGNED_NAT
          -> EXP FLOATING(f)

For a normal floating point number, each integer in mantissa is either the ASCII `.'-symbol or the ASCII representation of a digit of the representation in the given base; i.e. if c is the ASCII symbol, the digit value is c-'0'. The resulting floating point number has SHAPE FLOATING(f) and value mantissa * base^exponent, rounded according to rm. Usually the base will be 10 (sometimes 2) and the rounding mode to_nearest. Any floating-point evaluation of expressions done at translate-time will be done to an accuracy greater than that implied by the FLOATING_VARIETY involved, so that floating constants will be as accurate as the platform permits.
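The denoted value can be computed as in the following Python sketch, which deliberately omits rounding-mode handling and works in native floating point (an illustration of the mantissa * base^exponent rule, not the normative semantics):

```python
def make_floating(sign, mantissa, base, exponent):
    """Value denoted by make_floating: mantissa is a string of
    digits with an optional '.', and the value is
    mantissa * base**exponent, negated if sign is set."""
    point = mantissa.find(".")
    digits = mantissa.replace(".", "")
    value = 0
    for c in digits:
        value = value * base + (ord(c) - ord("0"))   # digit value is c-'0'
    # digits after the point scale the value down by base per digit
    frac_digits = 0 if point < 0 else len(mantissa) - point - 1
    value = value * base ** exponent / base ** frac_digits
    return -value if sign else value

pi_ish = make_floating(False, "3.14", 10, 0)
minus_five_hundred = make_floating(True, "5", 10, 2)
```
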

The make_floating construct does not apply to a complex FLOATING_VARIETY f; to construct a complex constant, use make_complex with two make_floating arguments.

Constants are also provided to give unique null values for pointers, label values and procs i.e.: make_null_ptr, make_null_local_lv and make_null_proc. Any significant use of these values (e.g. taking the contents of a null pointer) is undefined, but they can be assigned and used in tests in the normal way.

  1. [g]

    However see also initial_value in section 3.2.

11. Tokens and APIs

  1. 11.1. Application programming interfaces
  2. 11.2. Linking to APIs
  3. 11.3. Target independent headers, unique_extern
  4. 11.4. Language programming interfaces
  5. 11.5. Namespaces and APIs

All of the examples of the use of TOKENs so far given have really been as abbreviations for commonly used constructs, e.g. the EXP OFFSETs for fields of structures. However, the real justification for TOKENs is their use as abstractions for things defined in libraries or application program interfaces (APIs).

11.1. Application programming interfaces

APIs usually do not give complete language definitions of the operations and values that they contain; generally, they are defined informally in English giving relationships between the entities within them. An API designer should allow implementors the opportunity of choosing actual definitions which fit their hardware and the possibility of changing them as better algorithms or representations become available.

The most commonly quoted example is the representation of the type FILE and its related operations in C. The ANSI C definition gives no common representation for FILE; its implementation is defined to be platform-dependent. A TDF producer can assume nothing about FILE, not even that it is a structure. The only things that can alter or create FILEs are also entities in the ANSI C API, and they will always refer to FILEs via a C pointer. Thus TDF abstracts FILE as a SHAPE TOKEN with no parameters, make_tok(T_FILE) say. Any program that uses FILE would have to include a TOKDEC introducing T_FILE:

make_tokdec(T_FILE, empty, shape())

and anywhere that it wished to refer to the SHAPE of FILE it would do:

shape_apply_token(make_tok(T_FILE), ())

Before this program is translated on a given platform, the actual SHAPE of FILE must be supplied. This would be done by linking a TDF CAPSULE which supplies the TOKDEF for the SHAPE of FILE which is particular to the target platform.

Many of the C operations which use FILEs are explicitly allowed to be expanded as either procedure calls or as macros. For example, putc(c, f) may be implemented either as a procedure call or as the expansion of a macro which uses the fields of f directly. Thus, it is quite natural for putc(c, f) to be represented in TDF as an EXP TOKEN with two EXP parameters which allows it to be expanded in either way. Of course, this would be quite distinct from the use of putc as a value (as a proc parameter of a procedure, for example) which would require some other representation. One such representation that comes to mind might be simply to make a TAGDEC for the putc value, supplying its TAGDEF in the ANSI API CAPSULE for the platform. This might prove to be rather short-sighted, since it denies us the possibility that the putc value itself might be expanded from other values and hence it would be better as another parameterless TOKEN. I have not come across an actual API expansion for the putc value as other than a simple TAG; however the FILE* value stdin is sometimes expressed as:

#define stdin &_iob[0]

which illustrates the point. It is better to have all of the interface of an API expressed as TOKENs to give both generality and flexibility across different platforms.

11.2. Linking to APIs

In general, each API requires platform-dependent definitions to be supplied by a combination of TDF linking and system linking for that platform. This is illustrated in the following diagram giving the various phases involved in producing a runnable program.

[Diagram: portable program source (.c/.cc) and target-independent API headers (.h), containing TOKEN declarations, are compiled by the producer into a target-independent capsule (.j); this can be done on any platform. On any platform with the P-specific API linkages, tld links this capsule with other target-independent capsules (.j) and with capsules specific to platform P (.j), which supply the TOKEN definitions, giving a target-dependent linked capsule (.t). On platform P, trans translates the linked capsule to assembly source (.s), as assembles it to a binary object (.o), and ld links it with other binary objects (.o) and the system libraries (.o/.a/.so) to produce the executable.]
Figure 9. TDF Production, Linking and Translating

There will be CAPSULEs for each API on each platform giving the expansions for the TOKENs involved, usually as uses of identifiers which will be supplied by system linking from some libraries. These CAPSULEs would be derived from the header files on the platform for the API in question, usually using some automatic tools. For example, there will be a TDF CAPSULE (derived from <stdio.h>) which defines the TOKEN T_FILE as the SHAPE for FILE, together with definitions for the TOKENs for putc, stdin, etc., in terms of identifiers which will be found in the library libc.a.

11.3. Target independent headers, unique_extern

Any producer which uses an API will use system-independent information to give the common interface TOKENs for this API. In the C producer, this is provided by header files using pragmas, which tell the producer which TOKENs to use for the particular constructs of the API. In any target-independent CAPSULE which uses the API, these TOKENs would be introduced as TOKDECs and made globally accessible by using make_linkextern. For a world-wide standard API, the EXTERNAL "name" for a TOKEN used by make_linkextern should be provided by an application of unique_extern on a UNIQUE drawn from a central repository of names for entities in standard APIs; this repository would form a kind of super-standard for naming conventions in all possible APIs. The mechanism for controlling this super-standard has yet to be set up, so at the moment all EXTERN names are created by string_extern.

An interesting example in the use of TOKENs comes in abstracting field names. Often, an API will say something like "the type Widget is a structure with fields alpha, beta ..." without specifying the order of the fields or whether the list of fields is complete. The field selection operations for Widget should then be expressed using EXP OFFSET TOKENs; each field would have its own TOKEN giving its offset which will be filled in when the target is known. This gives implementors on a particular platform the opportunity to reorder fields or add to them as they like; it also allows for extension of the standard in the same way.

The most common SORTs of TOKENs used for APIs are SHAPEs to represent types, and EXPs to represent values, including procedures and constants. NATs and VARIETYs are also sometimes used where the API does not specify the types of integers involved. The other SORTs are rarely used in APIs; indeed it is difficult to imagine any realistic use of TOKENs of SORT BOOL. However, the criterion for choosing which SORTs are available for TOKENisation is not their immediate utility, but that the structural integrity and simplicity of TDF is maintained. It is fairly obvious that having BOOL TOKENs will cause no problems, so we may as well allow them.

11.4. Language programming interfaces

So far, I have been speaking as though a TOKENised API could only be some library interface, built on top of some language, like xpg3, posix, X etc. on top of C. However, it is possible to consider the constructions of the language itself as ideal candidates for TOKENisation. For example, the C for-statement could be expressed as a TOKEN with four parameters. This TOKEN could be expanded in TDF in several different ways, all giving the correct semantics of a for-statement. A translator (or other tools) could choose the expansion it wants depending on context and the properties of the parameters. The C producer could give a default expansion which a lazy translator writer could use, but others might use expansions which might be more advantageous. This idea could be extended to virtually all the constructions of the language, giving what is in effect a C-language API; perhaps this might be called more properly a language programming interface (LPI). Thus, we would have TOKENs for C for-statements, C conditionals, C procedure calls, C procedure definitions etc. [h]

The notion of a producer for any language working to an LPI specific to the constructs of the language is very attractive. It could use different TOKENs to reflect the subtle differences between uses of similar constructs in different languages which might be difficult or impossible to detect from their expansions, but which could allow better optimisations in the object code. For example, Fortran procedures are slightly different from C procedures in that they do not allow aliasing between parameters and globals. While application of the standard TDF procedure calls would be semantically correct, knowledge that the non-aliasing rule applies would allow some procedures to be translated to more efficient code. A translator without knowledge of the semantics implicit in the TOKENs involved would still produce correct code, but one which knew about them could take advantage of that knowledge.

I also think that LPIs would be a very useful tool for crystallising ideas on how languages should be translated, allowing one to experiment with expansions not thought of by the producer writer. This decoupling is also an escape clause allowing the producer writer to defer the implementation of a construct completely to translate-time or link-time, as is done at the moment in C for off-stack allocation. As such it also serves as a useful test-bed for TOKEN constructions which may in future become new constructors of core TDF.

11.5. Namespaces and APIs

Namespace problems are amongst the most difficult faced by standard defining bodies (for example, the ANSI and POSIX committees) and they often go to great lengths to specify which names should, and should not, appear when certain headers are included. (The position is set out in D. F. Prosser, Header and name space rules for UNIX systems (private communication), USL, 1993.)

For example, the intention, certainly in ANSI, is that each header should operate as an independent sub-API. Thus va_list is prohibited from appearing in the namespace when stdio.h is included (it is defined only in stdarg.h) despite the fact that it appears in the prototype:

int vprintf ( char *, va_list ) ;

This seeming contradiction is worked round on most implementations by defining a type __va_list in stdio.h which has exactly the same definition as va_list, and declaring vprintf as:

int vprintf ( char *, __va_list ) ;

This is legal only because, by convention, names beginning with __ are reserved for implementation use, and so __va_list is deemed not to corrupt the namespace.

This particular namespace convention is well-known, but there are others defined in these standards which are not generally known (and since no compiler I know tests them, not widely adhered to). For example, the ANSI header errno.h reserves all names given by the regular expression:


against macros (i.e. in all namespaces). By prohibiting the user from using names of this form, the intention is to protect against namespace clashes with extensions of the ANSI API which introduce new error numbers. It also protects against a particular implementation of these extensions - namely that new error numbers will be defined as macros.

A better example of protecting against particular implementations comes from POSIX. If sys/stat.h is included, names of the form:

st_[0-9a-z_A-Z]+

are reserved against macros (as member names). The intention here is not only to reserve field selector names for future extensions to struct stat (which would only affect API implementors, not ordinary users), but also to reserve against the possibility that these field selectors might be implemented by macros. So our st_atime example in section 2.2.3 is strictly illegal because the procedure name st_atime lies in a restricted namespace. Indeed the namespace is restricted precisely to disallow this program.
are reserved against macros (as member names). The intention here is not only to reserve field selector names for future extensions to struct stat (which would only affect API implementors, not ordinary users), but also to reserve against the possibility that these field selectors might be implemented by macros. So our st_atime example in section 2.2.3 is strictly illegal because the procedure name st_atime lies in a restricted namespace. Indeed the namespace is restricted precisely to disallow this program.

As an exercise to the reader, how many of your programs use names from the following restricted namespaces (all drawn from ANSI, all applying to all namespaces)?

is[a-z][0-9a-z_A-Z]+ (ctype.h)
to[a-z][0-9a-z_A-Z]+ (ctype.h)
str[a-z][0-9a-z_A-Z]+ (stdlib.h)

With the TDF approach of describing APIs in abstract terms using the #pragma token syntax most of these namespace restrictions are seen to be superfluous. When a target independent header is included precisely the objects defined in that header in that version of the API appear in the namespace. There are no worries about what else might happen to be in the header, because there is nothing else. Also implementation details are separated off to the TDF library building, so possible namespace pollution through particular implementations does not arise.

Currently TDF does not have a neat way of solving the va_list problem. The present target independent headers use a similar workaround to that described above (exploiting a reserved namespace). (See the footnote in section

None of this is intended as criticism of the ANSI or POSIX standards. It merely shows some of the problems that can arise from insufficient separation of interface and implementation.

  1. [h]

    Exercise for the reader: what are the SORTs of these parameters?

    The current C producer does this for some of the constructs, but not in any systematic manner; perhaps it will change.

12. TDF transformations

  1. 12.1. Transformations as definitions
  2. 12.2. Examples of transformations
  3. 12.3. Programs with undefined values

TDF to TDF transformations form the basis of most of the tools of TDF, including translators. TDF has a rich set of easily performed transformations; this is mainly due to its algebraic nature, the liberality of its sequencing rules, and the ease with which one can introduce new names over limited scopes. For example, a translator is always free to transform:

assign(e1, e2)

to:

identify(empty, new_tag, e1, assign(obtain_tag(new_tag), e2))

i.e. identify the evaluation of the left-hand side of the assignment with a new TAG and use that TAG as the left-hand operand of a new assignment in the body of the identification. Note that the reverse transformation is only valid if the evaluation of e1 does not side-effect the evaluation of e2. A producer would have had to use the second form if it wished to evaluate e1 before e2. The definition of assign allows its operands to be evaluated in any order while identify insists that the evaluation of its definition is conceptually complete before starting on its body.

Why would a translator wish to make the more complicated form from the simpler one? This would usually depend on the particular forms of e1 and e2 as well as the machine idioms available for implementing the assignment. If, for example, the joint evaluation of e1 and e2 used more evaluation registers than are available, the transformation is probably a good idea. It would not necessarily commit one to putting the new tag value into the stack; some other more global criteria might lead one to allocate it into a register disjoint from the evaluation registers. In general, this kind of transformation is used to modify the operands of TDF constructions so that the code-production phase of the translator can just "churn the handle" knowing that the operands are already in the correct form for the machine idioms.

Transformations like this are also used to give optimisations which are largely independent of the target architecture. In general, provided that the sequencing rules are not violated, any EXP construction, F(X), say, where X is some inner EXP, can be replaced by:

identify(empty, new_tag, X, F(obtain_tag(new_tag))).

This includes the extraction of expressions which are constant over a loop; if F was some repeat construction and one can show that the EXP X is invariant over the repeat, the transformation does the constant extraction.

Most of the transformations performed by translators are of the above form, but there are many others. Particular machine idioms might lead to transformations like changing a test (i>=1) to (i>0) because the test against zero is faster; replacing multiplication by a constant integer by shifts and adds because multiplication is slow; and so on. Target independent transformations include things like procedure inlining and loop unrolling. Often these target independent transformations can be profitably done in terms of the TOKENs of an LPI; loop unrolling is an obvious example.

12.1. Transformations as definitions

As well as being a vehicle for expressing optimisation, TDF transformations can be used as the basis for defining TDF. In principle, if we were to define all of the allowable transformations of the TDF constructions, we would have a complete definition of TDF as the initial model of the TDF algebra. This would be a fairly impracticable project, since the totality of the rules including all the simple constructs would be very unwieldy, difficult to check for inconsistencies and would not add much illumination to the definition. However, knowledge of allowable transformations of TDF can often answer questions of the meaning of diverse constructs by relating them to a single construct. What follows is an alphabet of generic transformations which can often help to answer knotty questions. Here, E[X \ Y] denotes an EXP E with all internal occurrences of X replaced by Y.

If F is any non order-specifying [i] EXP constructor and E is one of the EXP operands of F, then:

F(..., E, ...) ⇒ identify(empty, newtag, E, F(..., obtain_tag(newtag), ...)) {A}

If E is a non side-effecting [j] EXP and none of the variables used in E are assigned to in B:

identify(v, tag, E, B) ⇒ B[obtain_tag(tag) \ E] {B}

If all uses of tg in B are of the form contents(shape(E), obtain_tag(tg)):

variable(v, tg, E, B) ⇒ identify(v, nt, E, B[contents(shape(E), obtain_tag(tg)) \ obtain_tag(nt)]) {C}
sequence((S1, ..., Sn),
         sequence((P1, ..., Pm), R)) ⇔
         sequence((S1, ..., Sn, P1, ..., Pm), R) {D}

If Si = sequence((P1, ..., Pm), R):

sequence((S1, ..., Sn), T) ⇔
         sequence((S1, ...,
                  Si-1, P1, ..., Pm, R,
                  Si+1, ..., Sn), T) {E}

E ⇔ sequence((), E) {F}

If D is either identify or variable:

D(v, tag, sequence((S1, ..., Sn), R), B) ⇒
                   sequence((S1, ..., Sn), D(v, tag, R, B)) {G}

If Si is an EXP BOTTOM, then:

sequence((S1, S2, ..., Sn), R) ⇒
sequence((S1, ..., Si-1), Si) {H}

If E is an EXP BOTTOM, and if D is either identify or variable:

D(v, tag, E, B) ⇒ E {I}

If Si is make_top(), then:

sequence((S1, S2, ..., Sn), R) ⇔
sequence((S1, ..., Si-1,
Si+1, ..., Sn), R) {J}

If Sn is an EXP TOP:

sequence((S1, ..., Sn), make_top()) ⇔
sequence((S1, ..., Sn-1), Sn) {K}

If E is an EXP TOP and E is not side-effecting then:

E ⇒ make_top() {L}

If C is some non order-specifying and non side-effecting constructor, and Si is C(P1,..., Pm) where P1..m are the EXP operands of C:

sequence((S1, ..., Sn), R) ⇒
sequence((S1, ..., Si-1, P1,
          ..., Pm, Si+1, ..., Sn), R) {M}

If none of the Si use the label L:

conditional(L, sequence((S1, ..., Sn), R), A) ⇒
            sequence((S1, ..., Sn), conditional(L, R, A)) {N}

If there are no uses of L in X: [k]

conditional(L, X, Y) ⇒ X {O}

conditional(L, E, goto(Z)) ⇒ E[L \ Z] {P}

If EXP X contains no use of the LABEL L:

conditional(L, conditional(M, X, Y), Z) ⇒ conditional(M, X, conditional(L, Y, Z)) {Q}

repeat(L, I, E) ⇒ sequence((I), repeat(L, make_top(), E)) {R}

repeat(L, make_top(), E) ⇒ conditional(Z, E[L \ Z], repeat(L, make_top(), E)) {S}

If there are no uses of L in E:

repeat(L, make_top(), sequence((S, E), make_top())) ⇒
conditional(Z, S[L \ Z], repeat(L, make_top(), sequence((E, S), make_top()))) {T}

If f is a procedure defined [l] as:

make_proc(rshape, formal1, ..., formaln, vtg,
          B(return R1, ..., return Rm))

where:

formali = make_tagshacc(si, vi, tgi)

and B is an EXP with all of its internal return constructors indicated parametrically. Then, provided each Ai has SHAPE si:

apply_proc(rshape, f, (A1, ..., An), V) ⇒
variable(empty, newtag, make_value((rshape = BOTTOM) ? TOP : rshape),
         labelled((L), variable(v1, tg1, A1, ...,
                  variable(vn, tgn, An,
                  variable(empty, vtg, V,
                  B(sequence(assign(obtain_tag(newtag), R1), goto(L)), ...,
                    sequence(assign(obtain_tag(newtag), Rm), goto(L))))),
                  contents(rshape, obtain_tag(newtag)))) {U}
assign(E, make_top()) ⇒ sequence((E), make_top()) {V}

contents(TOP, E) ⇒ sequence((E), make_top()) {W}

make_value(TOP) ⇒ make_top() {X}

component(s, contents(COMPOUND(S), E), D) ⇒ contents(s, add_to_ptr(E, D)) {Y}

make_compound(S, ((E1, D1), ..., (En, Dn))) ⇒ variable(empty, nt, make_value(COMPOUND(S)),
    sequence((assign(add_to_ptr(obtain_tag(nt), D1), E1), ...,
              assign(add_to_ptr(obtain_tag(nt), Dn), En)),
             contents(S, obtain_tag(nt)))) {Z}

12.2. Examples of transformations

Any of these transformations may be performed by the TDF translators. The most important is probably {A} which allows one to reduce all of the EXP operands of suitable constructors to obtain_tags. The expansion rules for identification, {G}, {H} and {I}, give definition to complicated operands as well as strangely formed ones, e.g. return(... return(X)...). Rule {A} also illustrates neatly the lack of ordering constraints on the evaluation of operands. For example, mult(et, exp1, exp2) could be expanded by applications of {A} to either:

identify(empty, t1, exp1, identify(empty, t2, exp2, mult(et, obtain_tag(t1), obtain_tag(t2))))

or:

identify(empty, t2, exp2, identify(empty, t1, exp1, mult(et, obtain_tag(t1), obtain_tag(t2))))

Both orderings of the evaluations of exp1 and exp2 are acceptable, regardless of any side-effects in them. There is no requirement that both expansions should produce the same answer for the multiplications; the only person who can say whether either result is "wrong" is the person who specified the program.

Many of these transformations often only come into play when some previous transformation reveals some otherwise hidden information. For example, after procedure in-lining given by {U} or loop un-rolling given by {S}, a translator can often deduce the behaviour of a _test constructor, replacing it by either a make_top or a goto. This may allow one to apply either {J} or {H} to eliminate dead code in sequences and in turn {N} or {P} to eliminate entire conditionals and so on.

Application of transformations can also give expansions which are rather pathological in the sense that a producer is very unlikely to form them. For example, a procedure which returns no result would have result statements of the form return(make_top()). In-lining such a procedure by {U} would have a form like:

variable(empty, nt, make_value(TOP),
         ... sequence((assign(obtain_tag(nt), make_top())),
goto(L)) ...
         contents(TOP, obtain_tag(nt))

The rules {V}, {W} and {X} allow this to be replaced by:

variable(empty, nt, make_top(),
         ... sequence((obtain_tag(nt)), goto(L)) ...
         sequence((obtain_tag(nt)), make_top())

The obtain_tags can be eliminated by rule {M} and then the sequences by {F}. Successive applications of {C} and {B} then give:

         ... goto(L) ...

12.3. Programs with undefined values

The definitions of most of the constructors in the TDF specification are predicated by some conditions; if these conditions are not met the effect and result of the constructor is not defined for all possible platforms. [m] Any value which is dependent on the effect or result of an undefined construction is also undefined. This is not the same as saying that a program is undefined if it can construct an undefined value - the dynamics of the program might be such that the offending construction is never obeyed.

  1. [i]

    The order-specifying constructors are conditional, identify, repeat, labelled, sequence and variable.

  2. [j]

    A sufficient condition for not side-effecting in this sense is that there are no apply_procs or local_allocs in E; that any assignments in E are to variables defined in E; and that any branches in E are to labels defined in conditionals in E.

  3. [k]

    There are analogous rules for labelled and repeat with unused LABELs.

  4. [l]

    This has to be modified if B contains any uses of local_free_all or last_local.

  5. [m]

    However, we may find that the mapping of a constraint allows extra relationships for a class of architectures which do not hold in all generality; this may mean that some constructions are defined on this class while still being undefined in others (see section 13).

13. TDF expansions of offsets

  1. 13.1. Offsets within compound types
  2. 13.2. Bitfield offsets

13.1. Offsets within compound types

Consider the C structure defined by:

typedef struct{ int i; double d; char c;} mystruct;

Given that sh_int, sh_char and sh_double are the SHAPEs for int, char and double, the SHAPE of mystruct is constructed by:

SH_mystruct = compound(S_c)

where:

S_c = offset_add(O_c, shape_offset(sh_char))

and:

O_c = offset_pad(alignment(sh_char), S_d)

and:

S_d = offset_add(O_d, shape_offset(sh_double))

and:

O_d = offset_pad(alignment(sh_double), S_i)

where: [n]

S_i = offset_add(O_i, shape_offset(sh_int))

and:

O_i = offset_zero(alignment(sh_int))

Each of S_c, S_d and S_i gives the minimum "size" of the space required up to and including the fields c, d and i respectively. Each of O_c, O_d and O_i gives the OFFSET "displacement" from a pointer to a mystruct required to select the fields c, d and i respectively. The C program fragment:

mystruct s;
.... s.d = 1.0; ...

would translate to something like:

variable(empty, tag_s, make_value(compound(S_c)),
         ... assign(add_to_ptr(obtain_tag(tag_s), O_d), 1.0) ...)

Each of the OFFSET expressions above are ideal candidates for tokenisation; a producer would probably define tokens for each of them and use exp_apply_token to expand them at each of their uses.

From the definition, we find that:

S_c = shape_offset(SH_mystruct)
			i.e. an OFFSET(alignment(sh_int) ∪ alignment(sh_char) ∪ alignment(sh_double), {})

This would not be the OFFSET required to describe sizeof(mystruct) in C, since this is defined to be the difference between successive elements of an array of mystructs. The sizeof OFFSET would have to pad S_c to the alignment of SH_mystruct:

offset_pad(alignment(SH_mystruct), S_c)

This is the OFFSET that one would use to compute the displacement of an element of an array of mystructs using offset_mult with the index.

The most common use of OFFSETs is in add_to_ptr to compute the address of a structure or array element. Looking again at its signature in a slightly different form:

arg1: EXP POINTER(y ∪ A)
arg2: EXP OFFSET(y, z)
      -> EXP POINTER(z)
      ... for any ALIGNMENT A

one sees that arg2 can measure an OFFSET from a value of a "smaller" alignment than the value pointed at by arg1. If arg2 were O_d, for example, then arg1 could be a pointer to any structure of the form struct {int i; double d; ...}, not just mystruct. The general principle is that an OFFSET to a field constructed in this manner is independent of any fields after it, corresponding to normal usage in both languages and machines. A producer for a language which conflicts with this would have to produce less obvious TDF, perhaps by re-ordering the fields, padding the offsets by later alignments or taking maxima of the sizes of the fields.

13.2. Bitfield offsets

Bitfield offsets are governed by rather stricter rules. In order to extract or assign a bitfield, we have to find an integer variety which entirely contains the bitfield. Suppose that we wished to extract a bitfield by:

bitfield_contents(v, p:POINTER(X), b:OFFSET(Y, B))

Y must be an alignment(I) where I is some integer SHAPE, contained in X. Further, this has to be equivalent to:

bitfield_contents(v, add_to_ptr(p, d:OFFSET(Y, Y)), b':OFFSET(Y, B))

for some d and b' such that:

offset_pad(v, shape_offset(I)) >= b'

and:

offset_add(offset_pad(v, offset_mult(d, sizeof(I))), b') = b

Clearly, we have a limitation on the length of bitfields to the maximum integer variety available; in addition, we cannot have a bitfield which overlaps two such varieties.

The difficulties inherent in this may be illustrated by attempting to construct an array of bitfields using the nof constructor. Assuming a standard architecture, we find that we cannot usefully define an object of SHAPE nof(N, bitfield(bfvar_bits(b, M))) without padding this shape out to some integer variety which can contain M bits. In addition, they can only be usefully indexed (using bitfield_contents) either if M is some power of 2 or M*N is less than the length of the maximum integer variety. Thus a producer must be sure that these conditions hold if he is to generate and index this object simply. Even here he is in some difficulty, since he does not know the representational varieties of the integers available to him; also it is difficult for him to ensure that the alignment of the entire array is in some sense minimal. Similar difficulties occur with bitfields in structures - they are not restricted to arrays.

The solution to this conundrum in its full generality requires knowledge of the available representational varieties. Particular languages have restrictions which mean that sub-optimal solutions will satisfy their specifications on the use of bitfields. For example, C is satisfied with bitfields of maximum length 32 and simple alignment constraints. However, for the general optimal solution, I can see no reasonable alternative to the installer defining some tokens to produce bitfield offsets which are guaranteed to obey the alignment rules and also give optimal packing of fields and alignments of the total object for the platform in question. I believe that three tokens are sufficient to do this; these are analogous to the constructors offset_zero, offset_pad and offset_mult with ordinary alignments and their signatures could be:

n:        NAT
issigned: BOOL
          -> EXP OFFSET(A, bfvar_bits(issigned, n))

Here the result is a zero offset to the bitfield with `minimum' integer variety alignment A.

n:        NAT
issigned: BOOL
sh:       SHAPE
          -> EXP OFFSET(alignment(sh) xc8  A, bfvar_bits(issigned, n))

Here the result is the shape_offset of sh padded with the `minimum' alignment A so that it can accommodate the bitfield. Note that this may involve padding sh with the alignment of the maximum integer variety if there are not enough bits left at the end of sh.

n:        NAT
issigned: BOOL
ind:      EXP INTEGER(v)
          -> EXP OFFSET(A, bfvar_bits(issigned, n))

Here the result is an offset which gives the displacement of the ind-th element of an array of n-bit bitfields with `minimum' alignment A. Note that this will correspond to a normal multiplication only if n is a power of 2 or ind * n <= length of the maximum integer variety.

These tokens can be expressed in TDF if the lengths of the available varieties are known, i.e., they are installer defined. [o] They ought to be used in place of offset_zero, offset_pad and offset_mult wherever the alignment or shape (required to construct a SHAPE or an argument to the bitfield constructs) is a pure bitfield. The constructor nof should never be used on a pure bitfield; instead it should be replaced by:

S = compound(~Bitfield_offset_mult(M, b, N))

to give a shape, S, representing an array of N M-bit bitfields. This may not be just N*M bits; for example ~Bitfield_offset_mult may be implemented to pack an array of 3-bit bitfields as 10 fields to a 32-bit word. In any case, one would expect that normal rules for offset arithmetic are preserved, e.g.

offset_add(~Bitfield_offset_pad(M, b, S), size(bitfield(bfvar_bits(b, N)))) =
    ~Bitfield_offset_mult(M, b, N + 1)

where size(X) = offset_pad(alignment(X), shape_offset(X))
  1. [n]

    I could equally have given simply shape_offset(sh_int) for S_i, but the above formulation is more uniform with respect to selection OFFSETs.

  2. [o]

For most architectures, these definitions depend only on a few constants, such as the maximum length of a bitfield, expressed as tokens for the target. The precise specification of such target-dependent tokens is of current interest, but outside the scope of this document.

14. Models of the TDF algebra

  1. 14.1. Model for a 32-bit standard architecture
    1. 14.1.1. Alignment model
    2. 14.1.2. Offset and pointer model
    3. 14.1.3. Size model
  2. 14.2. Model for machines like the iAPX-432
    1. 14.2.1. Alignment model
    2. 14.2.2. Offset and pointer model
    3. 14.2.3. Size model
    4. 14.2.4. Offset arithmetic

TDF is a multi-sorted abstract algebra. Any implementation of TDF is a model of this algebra, formed by a mapping of the algebra into a concrete machine. An algebraic mapping gives a concrete representation to each of the SORTs in such a way that the representation of any construction of TDF is independent of context; it is a homomorphism. In other words if we define the mapping of a TDF constructor, C, as MAP[C] and the representation of a SORT, S, as REPR[S] then:

REPR[C(P1, ..., Pn)] = MAP[C](REPR[P1], ..., REPR[Pn])

Any mapping has to preserve the equivalences of the abstract algebra, such as those exemplified by the transformations {A} - {Z} in section 12.1. Similarly, the mappings of any predicates on the constructions, such as those giving "well-formed" conditions, must be satisfied in terms of the mapped representations.

In common with most homomorphisms, the mappings of constructions can exhibit more equivalences than are given by the abstract algebra. The use of these extra equivalences is the basis of most of the target-dependent optimisations in a TDF translator; it can make use of "idioms" of the target architecture to produce equivalent constructions which may work faster than the "obvious" translation. In addition, we may find that more predicates are satisfied in a mapping than would be in the abstract algebra. A particular concrete mapping might allow more constructions to be well-formed than are permitted in the abstract; a producer can use this fact to target its output to a particular class of architectures. In this case, the producer should produce TDF so that any translator not targeted to this class can fail gracefully.

Giving a complete mapping for a particular architecture here is tantamount to writing a complete translator. However, the mappings for the small but important sub-algebra concerned with offsets and alignments illustrate many of the main principles. What follows is two sets of mappings for disparate architectures; the first gives a more or less standard meaning to alignments, but the second may be less familiar.

14.1. Model for a 32-bit standard architecture

Almost all current architectures use a "flat-store" model of memory. There is no enforced segregation of one kind of data from another - in general, one can access one unit of memory as a linear offset from any other. Here, TDF ALIGNMENTs are a reflection of constraints for the efficient access of different kinds of data objects - usually one finds that 32-bit integers are most efficiently accessed if they start at 32 bit boundaries and so on.

14.1.1. Alignment model

The representation of ALIGNMENT in a typical standard architecture is a single integer where:

REPR[{ }] = 1
REPR[{bitfield}] = 1
REPR[{char_variety}] = 8
REPR[{short_variety}] = 16

Otherwise, for all other primitive ALIGNMENTs a:

REPR[{a}] = 32

The representation of a compound ALIGNMENT is given by:

REPR[A ∪ B] = Max(REPR[A], REPR[B])
i.e. MAP[unite_alignment] = Max

while the ALIGNMENT inclusion predicate is given by:

REPR[A ⊇ B] = REPR[A] ≥ REPR[B]

All the constructions which make ALIGNMENTs are represented here and they will always reduce to an integer known at translate-time. Note that the mappings for ∪ and ⊇ must preserve the basic algebraic properties derived from sets; for example the mapping of ∪ must be idempotent, commutative and associative, which is true for Max.
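This alignment model can be sketched in a few lines. The following is illustrative only and not part of the TDF specification; all names (`PRIMITIVE_REPR`, `repr_alignment`, `unite`, `includes`) are invented for the example.

```python
# Sketch of the 32-bit alignment model: an ALIGNMENT set is represented
# by a single integer, unite_alignment becomes Max, and set inclusion
# becomes a numeric comparison.

PRIMITIVE_REPR = {
    "bitfield": 1,
    "char_variety": 8,
    "short_variety": 16,
    # every other primitive alignment maps to 32
}

def repr_alignment(members):
    """REPR of an alignment set; REPR[{ }] = 1 for the empty set."""
    return max((PRIMITIVE_REPR.get(m, 32) for m in members), default=1)

def unite(a, b):
    """MAP[unite_alignment] = Max: idempotent, commutative, associative."""
    return max(a, b)

def includes(a, b):
    """REPR[A ⊇ B] = REPR[A] >= REPR[B]."""
    return a >= b

# {pointer} "includes" {char_variety} in this model, since 32 >= 8
assert includes(repr_alignment({"pointer"}), repr_alignment({"char_variety"}))
assert unite(repr_alignment({"short_variety"}), repr_alignment({"char_variety"})) == 16
```

Note how Max trivially satisfies the set-derived laws: `unite(a, a) == a`, `unite(a, b) == unite(b, a)`, and so on.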

14.1.2. Offset and pointer model

Most standard architectures use byte addressing; to address bits requires more complication. Hence, a value with SHAPE POINTER(A) where REPR[A] ≠ 1 is represented by a 32-bit byte address.

We are not allowed to construct pointers where REPR[A] = 1, but we still have offsets whose second alignment is a bitfield. Thus offsets to bitfields are represented differently from offsets to other alignments:

A value with SHAPE OFFSET(A, B) where REPR[B] ≠ 1 is represented by a 32-bit byte-offset.

A value with SHAPE OFFSET(A, B) where REPR[B] = 1 is represented by a 32-bit bit-offset.

14.1.3. Size model

In principle, the representation of a SHAPE is a pair of an ALIGNMENT and a size, given by shape_offset applied to the SHAPE. This pair is a constant which can be evaluated at translate time. The construction shape_offset(S) has SHAPE OFFSET(alignment(S), { }) and hence is represented by a bit-offset:

REPR[shape_offset(top())] = 0
REPR[shape_offset(integer(char_variety))] = 8
REPR[shape_offset(integer(short_variety))] = 16
.... etc. for other numeric varieties
REPR[shape_offset(pointer(A))]= 32
REPR[shape_offset(compound(E))] = REPR[E]
REPR[shape_offset(bitfield(bfvar_bits(b, N)))] = N
REPR[shape_offset(nof(N, S))] = N * REPR[offset_pad(alignment(S), shape_offset(S))]
        where S is not a bitfield shape
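These size rules can be exercised with a small layout computation. The sketch below is illustrative only; it works entirely in bit-offsets, ignoring the model's bit/byte distinction, and lays out a hypothetical compound of a char, a short and a 32-bit integer by alternately padding and adding.

```python
# Laying out a compound {char; short; int} with the size model:
# pad to each field's alignment, then add its size (all in bits).

def pad(align_bits, offset_bits):
    # round the offset up to the next multiple of the alignment
    return ((offset_bits + align_bits - 1) // align_bits) * align_bits

fields = [(8, 8), (16, 16), (32, 32)]   # (alignment, size) of char, short, int
offset = 0
for align, size in fields:
    offset = pad(align, offset) + size

# 8 bits of char, padded to 16 for the short, padded to 32 for the int:
# the compound occupies 64 bits and is already 32-bit aligned.
assert offset == 64
assert pad(32, offset) == 64
```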

Similar considerations apply to the other offset-arithmetic constructors. In general, we have:

REPR[offset_zero(A)] = 0             for all A

REPR[offset_pad(A, X:OFFSET(C, D))] =
    ((REPR[X] + REPR[A] - 1) / REPR[A]) * REPR[A] / 8
        if REPR[A] ≠ 1 ∧ REPR[D] = 1


REPR[offset_pad(A, X:OFFSET(C, D))] =
    ((REPR[X] + REPR[A] - 1) / REPR[A]) * REPR[A]
        otherwise
REPR[offset_add(X:OFFSET(A, B), Y:OFFSET(C, D))] = REPR[X] * 8 + REPR[Y]
        if REPR[B] ≠ 1 ∧ REPR[D] = 1


REPR[offset_add(X, Y)] = REPR[X] + REPR[Y]
        otherwise

REPR[offset_max(X:OFFSET(A, B), Y:OFFSET(C, D))] = Max(REPR[X], 8 * REPR[Y])
        if REPR[B] = 1 ∧ REPR[D] ≠ 1
REPR[offset_max(X:OFFSET(A, B), Y:OFFSET(C, D))] = Max(8 * REPR[X], REPR[Y])
        if REPR[D] = 1 ∧ REPR[B] ≠ 1


REPR[offset_max(X, Y)] = Max(REPR[X], REPR[Y])
        otherwise

REPR[offset_mult(X, E)] = REPR[X] * REPR[E]
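The byte/bit switching in these rules can be made concrete with a small sketch. This is illustrative only: an offset is modelled as an invented `(value, is_bit)` pair, with bit-offsets marked `True` and byte-offsets `False`, and only the `offset_pad` and `offset_add` cases above are covered.

```python
# Sketch of the byte/bit offset arithmetic.  offset_pad converts a
# bit-offset to a byte-offset when padding to a non-bitfield alignment;
# offset_add scales a byte-offset up to bits when adding a bit-offset.

def offset_pad(a_repr, x):
    v, is_bit = x
    if a_repr != 1 and is_bit:
        # bit-offset padded to a non-bitfield alignment: pad, then divide
        # by 8 to give a byte-offset
        return (((v + a_repr - 1) // a_repr) * a_repr // 8, False)
    return (((v + a_repr - 1) // a_repr) * a_repr, is_bit)

def offset_add(x, y):
    (xv, x_bit), (yv, y_bit) = x, y
    if not x_bit and y_bit:
        # byte-offset plus bit-offset: scale the byte part up to bits
        return (xv * 8 + yv, True)
    return (xv + yv, x_bit)

# padding a 13-bit offset to a 32-bit alignment gives 4 bytes
assert offset_pad(32, (13, True)) == (4, False)
# a 4-byte offset plus a 5-bit offset is a 37-bit offset
assert offset_add((4, False), (5, True)) == (37, True)
```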

A translator working to this model maps ALIGNMENTs into the integers and their inclusion constraints into numerical comparisons. As a result, it will correctly allow many OFFSETs which are disallowed in general; for example, OFFSET({pointer}, {char_variety}) is allowed since REPR[{pointer}] ≥ REPR[{char_variety}]. Rather fewer of these extra relationships are allowed in the next model considered.

14.2. Model for machines like the iAPX-432

The iAPX-432 does not have a linear model of store. The address of a word in store is a pair consisting of a block-address and a displacement within that block. In order to take full advantage of the protection facilities of the machine, block-addresses are strictly segregated from scalar data like integers, floats, displacements etc. There are at least two different kinds of block: one which can only contain block-addresses and another which contains only scalar data. There are clearly difficulties here in describing data-structures which contain both pointers and scalar data.

Let us assume that the machine has bit-addressing to avoid the bit complications already covered in the first model. Also assume that instruction blocks are just scalar blocks and that block addresses are aligned on 32-bit boundaries.

14.2.1. Alignment model

An ALIGNMENT is represented by a pair consisting of an integer, giving the natural alignments for scalar data, and a boolean to indicate the presence of a block-address. Denote this by:

(s:alignment_of_scalars, b:has_blocks)

We then have:

REPR[alignment({ })] = (s:1, b:FALSE)
REPR[alignment({char_variety})] = (s:8, b:FALSE)
... etc. for other numerical and bitfield varieties.
REPR[alignment({pointer})] = (s:32, b:TRUE)
REPR[alignment({proc})] = (s:32, b:TRUE)
REPR[alignment({local_label_value})] = (s:32, b:TRUE)

The representation of a compound ALIGNMENT is given by:

REPR[A ∪ B] = (s:Max(REPR[A].s, REPR[B].s), b:REPR[A].b ∨ REPR[B].b)

and their inclusion relationship is given by:

REPR[A ⊇ B] = (REPR[A].s ≥ REPR[B].s) ∧ (REPR[A].b ∨ ¬REPR[B].b)
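This pair-based model can be sketched as follows. The code is illustrative only; the names `Align`, `unite` and `includes` are invented, and the two sample alignments correspond to {long_variety} and {pointer}.

```python
# Sketch of the iAPX-432 alignment model: an ALIGNMENT is represented
# by a pair (s, b) of scalar alignment and a has-blocks flag.
from typing import NamedTuple

class Align(NamedTuple):
    s: int     # natural alignment of the scalar data, in bits
    b: bool    # does the aligned data contain block-addresses?

def unite(a: Align, c: Align) -> Align:
    # unite_alignment: Max on the scalar part, OR on the block flag
    return Align(max(a.s, c.s), a.b or c.b)

def includes(a: Align, c: Align) -> bool:
    # A ⊇ B: A must be at least as strictly aligned, and must have
    # blocks whenever B does
    return a.s >= c.s and (a.b or not c.b)

long_variety = Align(32, False)
pointer      = Align(32, True)

assert includes(pointer, long_variety)       # scalars can live with pointers
assert not includes(long_variety, pointer)   # but not vice versa
```

The second assertion anticipates the example at the end of this section: a scalar-only alignment can never include a pointer alignment, whatever the scalar alignments involved.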

14.2.2. Offset and pointer model

A value with SHAPE POINTER(A) where ¬REPR[A].b is represented by a pair consisting of a block-address of a scalar block and an integer bit-displacement within that block. Denote this by:

(sb:scalar_block_address, sd:bit_displacement)

A value with SHAPE POINTER(A) where REPR[A].b is represented by a quad word consisting of two block-addresses and two bit-displacements within these blocks. One of the block-addresses will contain the scalar information, pointed at by one of the bit-displacements; similarly, the other block-address and displacement give where the block-addresses in the data are held. Denote this by:

(sb:scalar_block_address, ab:address_block_address,
 sd:scalar_displacement, ad:address_displacement)

A value with SHAPE OFFSET(A, B) where ¬REPR[A].b is represented by an integer bit-displacement.

A value with SHAPE OFFSET(A, B) where REPR[A].b is represented by a pair of bit-displacements, one relative to a scalar-block and the other to an address-block. Denote this by:

(sd:scalar_displacement, ad:address_displacement)

14.2.3. Size model

The sizes given by shape_offset are now:

REPR[shape_offset(integer(char_variety))] = 8
... etc. for other numerical and bitfield varieties.
REPR[shape_offset(pointer(A))] = (REPR[A].b) ? (sd:64, ad:64) : (sd:32, ad:32)
REPR[shape_offset(offset(A, B))] = (REPR[A].b) ? 64 : 32
REPR[shape_offset(proc)] = (sd:32, ad:32)
REPR[shape_offset(compound(E))] = REPR[E]
REPR[shape_offset(nof(N, S))] = N * REPR[offset_pad(alignment(S), shape_offset(S))]
REPR[shape_offset(top())] = 0

14.2.4. Offset arithmetic

The other OFFSET constructors are given by:

REPR[offset_zero(A)] = 0              if ¬REPR[A].b
REPR[offset_zero(A)] = (sd: 0, ad: 0) if REPR[A].b

REPR[offset_add(X:OFFSET(A, B), Y:OFFSET(C, D))] = REPR[X] + REPR[Y]
        if ¬REPR[A].b ∧ ¬REPR[C].b
REPR[offset_add(X:OFFSET(A, B), Y:OFFSET(C, D))] =
    (sd:REPR[X].sd + REPR[Y].sd, ad:REPR[X].ad + REPR[Y].ad)
        if REPR[A].b ∧ REPR[C].b
REPR[offset_add(X:OFFSET(A, B), Y:OFFSET(C, D))] =
    (sd:REPR[X].sd + REPR[Y], ad:REPR[X].ad)
        if REPR[A].b ∧ ¬REPR[C].b

REPR[offset_pad(A, Y:OFFSET(C, D))] =
    ((REPR[Y] + REPR[A].s - 1) / REPR[A].s) * REPR[A].s
        if ¬REPR[A].b ∧ ¬REPR[C].b
REPR[offset_pad(A, Y:OFFSET(C, D))] =
    (sd:((REPR[Y].sd + REPR[A].s - 1) / REPR[A].s) * REPR[A].s, ad:REPR[Y].ad)
        if REPR[C].b
REPR[offset_pad(A, Y:OFFSET(C, D))] =
    (sd:((REPR[Y] + REPR[A].s - 1) / REPR[A].s) * REPR[A].s, ad:0)
        if REPR[A].b ∧ ¬REPR[C].b
REPR[offset_max(X:OFFSET(A, B), Y:OFFSET(C, D))] = Max(REPR[X], REPR[Y])
        if ¬REPR[A].b ∧ ¬REPR[C].b
REPR[offset_max(X:OFFSET(A, B), Y:OFFSET(C, D))] =
    (sd:Max(REPR[X].sd, REPR[Y].sd), ad:Max(REPR[X].ad, REPR[Y].ad))
        if REPR[A].b ∧ REPR[C].b
REPR[offset_max(X:OFFSET(A, B), Y:OFFSET(C, D))] =
    (sd:Max(REPR[X].sd, REPR[Y]), ad:REPR[X].ad)
        if REPR[A].b ∧ ¬REPR[C].b
REPR[offset_max(X:OFFSET(A, B), Y:OFFSET(C, D))] =
    (sd:Max(REPR[Y].sd, REPR[X]), ad:REPR[Y].ad)
        if REPR[C].b ∧ ¬REPR[A].b

REPR[offset_subtract(X:OFFSET(A, B), Y:OFFSET(C, D))] = REPR[X] - REPR[Y]
        if ¬REPR[A].b ∧ ¬REPR[C].b
REPR[offset_subtract(X:OFFSET(A, B), Y:OFFSET(C, D))] =
    (sd:REPR[X].sd - REPR[Y].sd, ad:REPR[X].ad - REPR[Y].ad)
        if REPR[A].b ∧ REPR[C].b
REPR[offset_subtract(X:OFFSET(A, B), Y:OFFSET(C, D))] = REPR[X].sd - REPR[Y]
        if REPR[A].b ∧ ¬REPR[C].b
.... and so on.
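The pattern behind these case analyses can be sketched compactly. The code below is illustrative only: a scalar-only offset is modelled as a plain bit-displacement, an offset whose first alignment has blocks as an invented `(sd, ad)` tuple, and only `offset_add` and `offset_subtract` are shown.

```python
# Sketch of the pair-wise offset arithmetic: offsets relative to
# alignments without blocks are plain bit-displacements; otherwise they
# are (sd, ad) pairs of scalar and address displacements.

def offset_add(x, y):
    x_pair, y_pair = isinstance(x, tuple), isinstance(y, tuple)
    if x_pair and y_pair:
        # component-wise addition of two pairs
        return (x[0] + y[0], x[1] + y[1])
    if x_pair:
        # a scalar-only offset advances only the scalar displacement
        return (x[0] + y, x[1])
    return x + y

def offset_subtract(x, y):
    if isinstance(x, tuple) and isinstance(y, tuple):
        return (x[0] - y[0], x[1] - y[1])
    if isinstance(x, tuple):
        return x[0] - y        # the result is a scalar displacement
    return x - y

assert offset_add((64, 32), (8, 32)) == (72, 64)
assert offset_add((64, 32), 16) == (80, 32)
assert offset_subtract((64, 32), 8) == 56
```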

Unlike the previous one, this model of ALIGNMENTs would reject OFFSETs such as OFFSET({long_variety}, {pointer}) but not OFFSET({pointer}, {long_variety}) since:

REPR[{long_variety} ⊇ {pointer}] = FALSE

REPR[{pointer} ⊇ {long_variety}] = TRUE

This just reflects the fact that there is no way that one can extract a block-address necessary for a pointer from a scalar-block, but since the representation of a pointer includes a scalar displacement, one can always retrieve a scalar from a pointer to a pointer.

ii. Conclusion

This commentary is not complete. I have tended to go into considerable detail on aspects which I consider might be unfamiliar, and to skip over others which occur in most compiling systems.

The philosophy underlying the whole TDF approach to portability is that of separation or isolation. This separation of the various components of the compilation system means that to a large extent they can be considered independently. The separation is only possible because the definition of TDF has mechanisms which facilitate it - primarily the token mechanism, but also the capsule linkage scheme.

The most important separation is that of the abstract description of the syntactic aspects of the API, in the form of the target independent headers, from the API implementation. It is this which enables the separation of target independent from target dependent code which is necessary for any Architecture Neutral Distribution Format. It also means that programs can be checked against the abstract API description, instead of against a particular implementation, allowing for effective API conformance testing of applications. Furthermore, it isolates the actual program from the API implementation, thereby allowing the programmer to work in the idealised world envisaged by the API description, rather than the real world of API implementations and all their faults.

This isolation also means that these API implementation problems are seen to be genuinely separate from the main program development. They are isolated into a single process, TDF library building, which needs to be done only once per API implementation. Because of the separation of the API description from the implementation, this library building process also serves as a conformance check for the syntactic aspects of the API implementation. However the approach is evolutionary in that it can handle the current situation while pointing the way forward. Absolute API conformance is not necessary; the TDF libraries can be used as a medium for workarounds for minor implementation errors.

The same mechanism which is used to separate the API description and implementation can also be used within an application to separate the target dependent code from the main body of target independent code. This use of user-defined APIs also enables a separation of the portability requirements of the program from the particular ways these requirements are implemented on the various target machines. Again, the approach is evolutionary, and not prescriptive. Programs can be made more portable in incremental steps, with the degree of portability to be used being made a conscious decision.

In a sense the most important contribution TDF has to portability is in enabling the various tasks of API description, API implementation and program writing to be considered independently, while showing up the relationships between them. It is often said that well specified APIs are the solution to the world's portability and interoperability problems; but by themselves they can never be. Without methods of checking the conformance of programs which use the API and of API implementations, the APIs themselves will remain toothless. TDF, by providing syntactic API checking for both programs and implementations, is a significant first step towards solving this problem.