Table of Contents

Name

logfile - EPL system log file format (ERP files)

Description

EPL System Log File Format

J. C. Hansen

Log File Structure and Function

Log files are created during digitization and stored on disk. The information in a log file summarizes the events, their times of occurrence, and the condition code for each entry. Each entry also contains a single byte that can hold 8 separate flags for use during subsequent processing. Here is the "C" language structure used to deal with log entries in a log file:


/* log.h */
#define DELTMRK 0160000            /* delete mark */
#define PAUZMRK 0140000            /* pause mark */
struct log {
    int evntno;        /* event number; neg. is deleted */
    int clkthi;        /* clock high order ticks */
    int clktlo;        /* clock low order ticks */
    char ccode;        /* condition code */
    char flags;        /* flags */
};

The clock time is in terms of samples, or sampling clock ticks; the sampling rate is not stored in the log file (although it probably should have been). The clock time is stored as two integers because of inconsistencies that once existed between the PDP11 floating point long integer format and the "C" compiler long integer format. Log files are not prefixed by a header, unlike the raw data files. Each entry is 8 bytes long, and the first entry in a log file starts at offset 0 in the file. This arrangement has a slight advantage in that the offset of an entry is easily calculated from its ordinal position, or item number. It also makes it easy to buffer and process log files, because 64 entries fit in a 512 byte block.
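The offset arithmetic alluded to above is trivial because entries are a fixed 8 bytes and the file has no header. A minimal sketch (the helper names log_offset, log_block, and log_slot are illustrative, not part of the original code):

```c
#define LOGSIZE 8                          /* bytes per log entry */
#define BLKSIZE 512                        /* disk block size */
#define LOGS_PER_BLK (BLKSIZE / LOGSIZE)   /* 64 entries per block */

/* byte offset in the log file of entry 'item' (0-based item number) */
long log_offset(long item) { return item * LOGSIZE; }

/* which 512-byte block holds entry 'item' */
long log_block(long item)  { return item / LOGS_PER_BLK; }

/* slot of entry 'item' within its block */
int  log_slot(long item)   { return (int)(item % LOGS_PER_BLK); }
```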

Event numbers

Certain event numbers are not associated with experimental events per se, but rather denote pauses in the collection of data during digitizing, and demarcate segments that are to be altered (e.g. deleted) in subsequent analyses. For this reason, the most significant three bits of the event number should not be used by bona fide events. As shown in the "log.h" file above, the two special events PAUZMRK and DELTMRK employ these bits and denote points in the data where a pause occurred or where a pause with a "delete segment" request was made, respectively. A DELTMRK thus indicates that a pause occurred, and additionally that the experimenter wishes to delete all the events in the log file from the last pause or delete point up to the current DELTMRK entry.
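Under the stated layout, the two marks and ordinary events can be distinguished by examining the top three bits of the event number. A hedged sketch, assuming 16-bit event values as on the PDP11 (the mask name MRKBITS and the helper names are illustrative):

```c
#define DELTMRK 0160000   /* delete mark (top three bits: 111) */
#define PAUZMRK 0140000   /* pause mark  (top three bits: 110) */
#define MRKBITS 0160000   /* mask for the three reserved bits */

/* nonzero if 'ev' is a pause mark */
int is_pauzmrk(unsigned ev) { return (ev & MRKBITS) == PAUZMRK; }

/* nonzero if 'ev' is a delete mark */
int is_deltmrk(unsigned ev) { return (ev & MRKBITS) == DELTMRK; }

/* nonzero if 'ev' is an ordinary experimental event (no reserved bits set) */
int is_event(unsigned ev)   { return (ev & MRKBITS) == 0; }
```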

Deleted Events

The deletion of events in a log file does not entail an actual removal of the offending entries, but rather the flagging of the event numbers in the log file using the most significant bit of the event number. This method has the advantage that one can subsequently "undelete" those events (at least theoretically). This flagging of to-be-deleted events occurs after digitization is complete; the log file is processed to mark deleted segments (this is colloquially known as "cooking" the log file). Note that both PAUZMRK and DELTMRK events fall into the category of deleted events. Programs which subsequently process data or use the log file are responsible for noting that events with negative event numbers are deleted events. The log file, however, is not the only mechanism by which events can be deleted. There are a number of reasons why an event may not be used during averaging or procedures which employ the raw data file associated with the log file. Any errors on the raw data tape can cause the loss of specific events, and artifact rejection allows data-specific deletion of events.
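The sign-bit convention above can be expressed directly in code. A sketch of what the "cooking" step does to an individual entry, assuming two's complement 16-bit arithmetic; mark_deleted, undelete, and is_deleted are illustrative helper names, and short stands in for the PDP11's 16-bit int:

```c
/* 'short' plays the role of the PDP11's 16-bit int */
struct log {
    short evntno;      /* neg. is deleted */
    short clkthi;
    short clktlo;
    char  ccode;
    char  flags;
};

/* an entry is deleted iff its event number is negative */
int is_deleted(const struct log *lp) { return lp->evntno < 0; }

/* flag an entry as deleted by setting the sign bit */
void mark_deleted(struct log *lp)
{
    lp->evntno = (short)((unsigned short)lp->evntno | 0x8000u);
}

/* "undelete" by clearing the sign bit (theoretically reversible) */
void undelete(struct log *lp)
{
    lp->evntno = (short)((unsigned short)lp->evntno & 0x7fffu);
}
```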

Relation of Log Files to Raw Files

As mentioned in the document on raw data formats, raw files are more likely to contain errors than are log files, especially if the raw data are stored in a magtape file. For this reason, any program which processes the raw data should use the log file to verify the validity of events. This is best accomplished by checking not only the event number, but also the clock times (see the document on raw data formats). This approach is used in the current implementation of the averaging programs. In addition, one should make valiant attempts to rematch the log and raw files by reading raw file records until an error-free record (hardware-wise) has been acquired, and then reading log entries in the forward direction. Further details on rematching can be found in the program source code.
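The verification step can be sketched as a comparison on both event number and clock time. The rawev structure below is a hypothetical view of the event fields carried in a raw record (the real layout is defined in the raw data format document), and masking off the log file's delete flag assumes the raw data were written before "cooking":

```c
/* 'short' plays the role of the PDP11's 16-bit int */
struct log {
    short evntno;      /* neg. is deleted */
    short clkthi;
    short clktlo;
    char  ccode;
    char  flags;
};

/* Hypothetical event fields of a raw record; see the raw data
 * format document for the actual layout. */
struct rawev {
    short evntno;
    short clkthi;
    short clktlo;
};

/* Nonzero if the raw event matches the log entry on both event
 * number and clock time, ignoring the log file's delete flag. */
int log_matches(const struct log *lp, const struct rawev *rp)
{
    return (lp->evntno & 0x7fff) == (rp->evntno & 0x7fff)
        && lp->clkthi == rp->clkthi
        && lp->clktlo == rp->clktlo;
}
```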

Expected Modifications

The most painful aspect of the current format of log files is the lack of information regarding the sampling rate. As a result of this oversight, a number of programs require the user to enter the sampling rate that was employed when the data were recorded. The simple addition of this information to the log files would simplify the use of these programs as well as relieve the user of the task of remembering the sampling rate. One approach would be to prefix the log files with a header. On the other hand, it may be simpler to make the first log entry a dummy entry (e.g. a PAUZMRK) and store the sampling period, in units of tens of microseconds, in the clock high order field. In either case, substantial program changes will be required, and retrograde compatibility will be difficult to maintain. Yuck.
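The dummy-entry proposal can be sketched concretely. This is purely illustrative of the proposed (not implemented) change; make_rate_entry and rate_hz are hypothetical names, and short stands in for the PDP11's 16-bit int:

```c
/* 'short' plays the role of the PDP11's 16-bit int */
struct log {
    short evntno;
    short clkthi;
    short clktlo;
    char  ccode;
    char  flags;
};

#define PAUZMRK 0140000   /* pause mark */

/* Build the proposed dummy first entry: a PAUZMRK whose clkthi field
 * holds the sampling period in units of tens of microseconds. */
struct log make_rate_entry(short period_10us)
{
    struct log e = { (short)PAUZMRK, period_10us, 0, 0, 0 };
    return e;
}

/* Recover the sampling rate in Hz from such an entry:
 * rate = 1 / (period_10us * 10 microseconds) = 100000 / period_10us */
long rate_hz(const struct log *lp)
{
    return 100000L / lp->clkthi;
}
```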

