Edteva — our former tidal processing program

This program provided BODC with tidal processing capabilities from 1993 to 2005. It has since been superseded by Edserplo but is described here for historical completeness.

The following assumes some familiarity with software development.

Introduction to Edteva

Since 1992, BODC have been responsible for the management of data collected from the UK Tide Gauge Network. To process these data efficiently, BODC developed Edteva — a program for editing, visualisation and analysis of tidal data.

At the time, UK Tide Gauge Network data were collected using DATARING (Palin and Rae, 1987). The Tidal Elevation Reduction Package (TERP) had been developed at the Proudman Oceanographic Laboratory (POL) in the 1970s and extended in the 1980s to cater for the introduction of DATARING. TERP ran on a mainframe and this, combined with its poor performance and lack of editing capability, drove the development of a replacement, TEVA (Tides: Editing, Visualisation and Analysis).

Edteva, the principal program within the TEVA package, was developed primarily to handle DATARING data, but the design was generalised to support routine reduction, ad hoc manipulation and analysis. The user interface and time series display came from Serplo (Lowry and Loch, 1995), a BODC program for visualising depth and time series on graphics workstations. Edteva featured six 'pages':

  • Introductory
  • Port/Channel selection
  • Colour Adjustment
  • Time Series Display
  • Output Control
  • Analysis Retrieval and Display

Functionality

Edteva incorporated the following functionality:

Input

  • Arbitrary data sampling (down to one second)
  • 11 BODC input formats
  • Automatic format recognition
  • Up to 13 dependent parameters and 50 ports could be processed at one time
  • Cross referencing of ports and parameters
  • Generation of residuals and tide at load time

Output

  • Seven output formats
  • Uniform quarter-hourly, hourly and daily value generation
  • Removal of overlaps
  • Automatic generation of file names
  • Arbitrary data spans
  • Span dissection

Data storage

  • A dump file retained data (excepting analyses and statistics) between Edteva sessions
  • Dump file locking allowed the sharing of data between users without corruption
  • Analyses could be stored and retrieved from an Oracle relational database

Data display

  • User assignment of parameters to axes
  • Horizontal zooming (rapid changes of scale)
  • Panning in the horizontal and the vertical
  • Overlay of ports and channels differentiated by colour
  • Selection by input file
  • Individual data point values

Editing

  • Flag editing through box encapsulation
  • Automatic flagging of overlaps
  • Value editing by replacement, interpolation and gap filling
  • Addition and multiplication by a constant value
  • Time shifting and time stretching
  • Update of derived channels where appropriate

Analysis

  • POL's harmonic analysis (Murray, 1964)
  • Automatic selection of constituent set allied to a user-specified override
  • Arbitrary data span
  • Span dissection into years or months
  • Comparison of up to 50 analyses with constituents and analyses selectable
  • User storable comments
  • Storage in an Oracle relational database
  • Automatic recomputation of derived channels with latest analysis

Statistics

  • Generation of monthly statistics, including extremes, surges and daily mean sea level (MSL)
  • Tide gauge downtime (history)
  • Output stored in an Oracle relational database with locking to prevent inadvertent update

Productivity and performance

Normally, it took some 1.5 - 2 person days per week to accomplish the routine reduction of DATARING data from 45 gauges. In 1992 it had required four people (admittedly not all full time) to process 35 ports. An analysis on a port year of hourly data took nine seconds on a Silicon Graphics Indigo R3000.

Design philosophy

Edteva was designed to take advantage of the increased power and display capabilities of graphics workstations. Instead of having functionality divided amongst a large number of different programs as in TERP, it was possible, because of increased memory availability, to combine much of it into a single interactive program. This, when linked with very fast display graphics (typically >200,000 vectors per second), vastly improved staff productivity.

While editing was limited to the data of a single port at any one time, most other operations could be applied across a number of ports and/or channels. The software operated on the basis of the user selecting an appropriate set of ports and/or channels before invoking the commands of interest.

Data characterisation

Edteva operated with data series — repeated sets of readings of a fixed number of channels. Each series was associated with only one port.

Series did not need to be uniformly sampled; for example, DATARING gauges normally sampled every 15 minutes but were switched to a higher sampling frequency at times of maintenance.

Series could have readings which overlapped in time with those of other series for the same port. This was the norm for DATARING because the full contents of the gauge's circular buffer store, which might have a capacity of 10 days, were retrieved every week.

Overlaps of this nature were eliminated for the purposes of output or analysis by Edteva generating a temporary composite series.
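
The composite-generation step can be sketched as follows. The data layout and the rule that the later-loaded series takes precedence are illustrative assumptions, not Edteva's actual (Fortran) implementation:

```python
# Sketch of overlap elimination for one port (hypothetical data layout).
# Each series is a list of (timestamp, value) pairs; where two series
# contain readings for the same timestamp, the later-loaded series wins.

def composite(series_list):
    """Merge possibly overlapping series into one composite series."""
    merged = {}
    for series in series_list:          # later series overwrite earlier ones
        for timestamp, value in series:
            merged[timestamp] = value
    return sorted(merged.items())       # composite series in time order

a = [(0, 1.0), (15, 1.2), (30, 1.4)]    # e.g. minutes past the hour
b = [(30, 1.5), (45, 1.6)]              # overlaps 'a' at t=30
print(composite([a, b]))                # [(0, 1.0), (15, 1.2), (30, 1.5), (45, 1.6)]
```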

Within Edteva each data value had an associated one-byte flag. This was normally blank. It was used to indicate whether the data value was null, interpolated or suspect. To suppress overlapping data the flag was set to 'x'. Some formats, notably BODC's QXF, to which Edteva interfaced, supported the use of this flag.
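
A minimal sketch of this flag convention: only blank (good) and 'x' (suppressed overlap) are stated above, so the filtering helper and data layout below are illustrative:

```python
# Illustrative flag handling: each value carries a one-byte flag, blank for
# good data; 'x' marks values suppressed because they overlap another series.
# (The characters used for null/interpolated/suspect are not shown here.)
OVERLAP = 'x'

def usable(values):
    """Keep (time, value) pairs whose flag is not 'x'."""
    return [(t, v) for (t, v, flag) in values if flag != OVERLAP]

data = [(0, 1.0, ' '), (15, 1.2, 'x'), (30, 1.4, ' ')]
print(usable(data))   # [(0, 1.0), (30, 1.4)]
```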

Data loading and the dump file

All data resided in the program's virtual memory and loading of data from files was confined to program start-up.

At termination all the data would normally be written out to a dump file which could then be accessed by the next Edteva session. The names of files to be read, apart from the dump file, were listed in a driver file. The dump file retained port, channel and other setting information, as well as the tidal data set.

The programming language was Fortran 77 and Fortran 90; Fortran 90 allowed the main arrays to be allocated dynamically. When used for DATARING, Edteva typically operated with two months of data for each port. To allow new data to be added, the user had to delete the earlier data using the program's delete function. A typical Edteva command line was

edteva driver dump.dmp -wabc

where -w write-locks the dump file for the user with initials abc.

Port and channel (porch) identification

Port names were at most 20 characters long, and each port was identified by a single positive integer within Edteva. The numbering convention adopted was:

  • 1 - 99,999 for UK Tide Gauge Inspectorate (TGI) ports
  • 100,001 - 199,999 for Global Sea Level Observing System (GLOSS) ports
  • 200,001 - 299,999 for World Ocean Circulation Experiment (WOCE) ports

Further sequences could be defined. Depending on the format, ports could be identified by name or by number within the input data.
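
The numbering convention above amounts to a simple range classification; the function name is illustrative:

```python
# Classify an Edteva port number into its numbering sequence, as listed
# above. Ranges are inclusive; further sequences could be defined.

def port_sequence(port):
    if 1 <= port <= 99_999:
        return 'TGI'
    if 100_001 <= port <= 199_999:
        return 'GLOSS'
    if 200_001 <= port <= 299_999:
        return 'WOCE'
    return 'unknown'

print(port_sequence(314))      # TGI
print(port_sequence(100_123))  # GLOSS
```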

To allow file names to be generated automatically, the user could define three-character port abbreviations to be used with the output file name template. These abbreviations appeared in a user-defined port file (the first porch file), which was identified in the driver file using a suitable syntax.

An associated channel file (the second porch file) was used to specify the generation of those channels not present in the data, e.g. tide, residuals and differences. It was also used to identify input channels where the format (e.g. DATARING) was not self-descriptive.

When combining data from different sources, the user ran into the problem of different naming conventions to identify essentially the same channel. Edteva allowed the user to specify channel aliases within the driver file, utilising a suitable syntax, to overcome this problem.

Initiating output, statistics and analysis

For output the user selected the

  • output format
  • data span (start and end dates/times)
  • span dissection (how the span is divided — calendar months, years or decades, or not at all)
  • sampling interval for uniform output if required (15 minutes to one day)
  • channels and ports
  • file name template

Most of this user dialogue was conducted via a pop-up menu on the Output page. The output operation was then initiated. If there were overlaps in the underlying data then the output process was aborted for the port concerned. These overlaps could be automatically flagged by the program.

To compute tidal analyses and monthly statistics the user defined a span, and for analyses, a span dissection. The monthly statistics were written straight to the database and the analyses were held in the analysis buffer for inspection on the Analysis page.
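
Span dissection into calendar months can be sketched like this (a hypothetical helper in Python, not Edteva's Fortran):

```python
# Split an output/analysis span into calendar-month sub-spans, each
# expressed as an inclusive (first_day, last_day) pair.
from datetime import date, timedelta

def dissect_by_month(start, end):
    """Yield (first, last) day pairs for each calendar month in [start, end]."""
    spans = []
    current = start
    while current <= end:
        if current.month == 12:
            next_month = date(current.year + 1, 1, 1)
        else:
            next_month = date(current.year, current.month + 1, 1)
        month_end = next_month - timedelta(days=1)
        spans.append((current, min(month_end, end)))
        current = next_month
    return spans

# Three spans: 15-30 Nov 2004, 1-31 Dec 2004, 1-10 Jan 2005
print(dissect_by_month(date(2004, 11, 15), date(2005, 1, 10)))
```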

Display capabilities

Selection of the data on the time series page occurred in two places.

The interface was equipped with toggles (the F9 and F11 keys) to switch between

  • a single port and all those ports selected on page one
  • a single channel and all those channels selected, also on page one

Channels were plotted on specific horizontal axes using colour to differentiate those channels on the same axis.

Each port's data consisted of a set of possibly overlapping series. Pressing F10 toggled between all the series for a port and a single series.

Data editing operations were performed on the time series page in single-series mode. Some operations also required single-channel mode. When multiple ports or multiple series were displayed, colour was used to identify the ports and series respectively.

The horizontal scale could be zoomed very quickly, from years to seconds and back, by pressing the 'i' and 'o' keys. The expansion or contraction was centred on the cursor, making it easy to examine points of interest.

The display could be panned horizontally and vertically using the arrow keys, and horizontally using mouse button clicks. Vertical scales were manipulated and displayed on page one.

As tidal ranges vary considerably, Edteva allowed the user the option of defining channel scales on a port-by-port basis. What was actually used or displayed were the values for the current port.

Editing

Data editing took place on the time series page and took two forms.

Flag editing was limited to changing flag values. Flags were set or unset (set blank) through middle-mouse clicks. The data values affected were specified by an encapsulating box or the cycle cursor.

Value editing was initiated through a pop-up menu. It included options for

  • gap filling through interpolation
  • data cycle deletion
  • time shifting and stretching
  • the operations of addition and multiplication
  • unit conversions (e.g. feet/metres)
  • datum shifts

Interpolations were automatically flagged.

Tidal analysis

The Analysis page displayed the analysis buffer which could contain up to 50 harmonic analyses, each of 140 constituents.

The analyses' constituents were interleaved vertically for easy comparison, and the user had the option of toggling between subsets of analyses and constituents to obtain the desired juxtaposition. For example, just M2 and S2 could be displayed for all ports.

Differentiation of the constituents in the column was through background colour coding. An analogue display of the differences between constituents for a set of analyses could also be toggled on if desired. These differences were computed as distances between the constituents treated as numbers in the complex plane.
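
Treating a constituent of amplitude H and phase lag g as the complex number H·e^(ig), the displayed difference is the modulus of the difference of the two numbers. A sketch (the sign convention for the phase is an assumption):

```python
# Distance between two harmonic constituents in the complex plane,
# as described above.
import cmath
import math

def constituent_distance(h1, g1_deg, h2, g2_deg):
    """|z1 - z2| where z = H * exp(i*g), g being the phase lag in degrees."""
    z1 = cmath.rect(h1, math.radians(g1_deg))
    z2 = cmath.rect(h2, math.radians(g2_deg))
    return abs(z1 - z2)

# Two M2 estimates differing slightly in amplitude and phase:
print(constituent_distance(3.10, 322.0, 3.05, 324.0))   # roughly 0.12
```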

The page was equipped with options to store and retrieve analyses from BODC's Oracle database and from POL Applications' database of standard analyses. Thus it was very easy to see how newly derived analyses compared with the standards.

Analyses could be tagged with user comments. Information displayed included port, span, parameter names, date/time of analysis, standard deviation of data and residuals.

Having generated an analysis it was possible to regenerate the residuals and other derived information. This could be done automatically using the last generated port analysis. This recalculation was also performed automatically on turning to the time series page.
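
Regenerating the tide and residual channels rests on evaluating the harmonic sum. A simplified sketch follows; the constituent values are hypothetical, and nodal corrections and astronomical phase arguments, which a real analysis such as Murray's includes, are omitted:

```python
# Simplified harmonic prediction: tide(t) = Z0 + sum of H*cos(speed*t - g).
import math

def predict(t_hours, z0, constituents):
    """Predicted height at time t (hours from the analysis epoch).
    constituents: list of (speed_deg_per_hour, amplitude, phase_lag_deg)."""
    tide = z0
    for speed, amplitude, phase in constituents:
        tide += amplitude * math.cos(math.radians(speed * t_hours - phase))
    return tide

# Hypothetical M2 + S2 analysis for a port (the speeds are the real values):
analysis = [(28.9841042, 3.1, 322.0), (30.0, 1.0, 5.0)]
observed = 7.9                              # an observed hourly height
residual = observed - predict(0.0, 5.0, analysis)
```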

Inhibitory factors for the uptake of Edteva

Edteva was a powerful tool for tidal processing and might usefully form the basis of further developments in this field. In this section we outline factors which inhibited the use of the current version of Edteva outside POL.

The system had been written in Fortran 77 and Fortran 90 (nearly 25,000 lines) utilising the obsolescent Silicon Graphics IRIS GL graphics library, and ran on Silicon Graphics (Unix) workstations. Today's comparable library is OpenGL.

The set of input formats included

  • POL's Basic Card-image (Metric) format. This could be used for entering data where there was no more than a single dependent channel.
  • The Antarctic Circumpolar Current Levels by Altimetry and Island Measurements (ACCLAIM) format. This could be used for entering data where there was more than one such channel.

The Oracle relational database was used to store statistics and analyses. If the user was not interested in these aspects, the lack of Oracle did not matter.

Edteva monitored the accessibility of Oracle every five minutes, reporting the result through the Output and Analysis pages.

Computation of analyses, as opposed to statistics (there was no statistics buffer), was not inhibited by the lack of Oracle. However, obtaining suitable analysis output in Oracle's absence was a problem, although not for a small number of analyses, as the output was simultaneously listed in the invoking window.

Edteva was also integrated with the POL Applications' (now NOC Applications) database (implemented as a set of random access files) on which POL's standard analyses were stored.

The screen could be captured. However, the quality of the resulting hardcopy left much to be desired, particularly in the case of the Analysis Retrieval and Display page.

Replacement

Edteva was superseded by Edserplo in 2005.

Further information

Email Steve Loch.

References

Lowry, R.K. and Loch, S.G., 1995. Transfer and Serplo: powerful data quality control tools developed by the British Oceanographic Data Centre. Geological Data Management, Geological Society Special Publication No. 97, 109-115.

Murray M.T., 1964. A general method for the analysis of hourly heights of the tide. International Hydrographic Review, 41(2), 91-101.

Palin R.I.R. and Rae J.B., 1987. Data transmission and acquisition systems for shore-based sea-level measurements. In: Fifth International Conference on Electronics for Ocean Technology, 1-6, 1987 Heriot-Watt University, Edinburgh. Institution of Electronic and Radio Engineers, London, IERE publication #72.