Sensor Activities on the Virginia Coast Reserve

Issue: 
Fall 2012

John Porter (VCR)

Since the SensorNIS workshop, sensor work at the VCR/LTER has centered on several activities.

First, we have deployed some new sensing systems, including a radar-based tide gauge on Hog Island. This gauge sends its data wirelessly, using a Campbell Scientific CR206 (our main "workhorse" logger these days), to a network node on the north end of Hog Island, from which the data travel via Wi-Fi back to our lab.  Additionally, we have been deploying networks of autonomous sensors at sites too isolated to reach our existing network backbone.  These include groundwater monitoring stations on Smith and Metompkin Islands and a network of tipping-bucket rain gauges deployed along the Delmarva Peninsula.  Although all these stations could be accessed wirelessly, the cost of doing so would be high, so at least for now their data will be downloaded manually. Most of these stations are deployed where they can be reached fairly easily by car or after a short boat ride.

Second, we have been taking a harder look at how to handle reporting of sensor problems and the creation of level-1 datasets that have more advanced QA/QC and data flagging.  We are moving from a system with two fundamental data forms to one that uses three.  The "two-form" system kept raw data in the form it arrived directly from the sensor, typically as a text file. This was then processed to create a rudimentary level-1a dataset by ingesting the data and performing basic data-type and range checks.  Any additional corrections, such as fixing clock errors or sensor calibration errors, were made by altering the level-1a product with a program. The downside of this model is that post-hoc corrections need to be applied very carefully: once a number has been corrected (e.g., multiplied by 2), you don't want to accidentally run the same correction again (e.g., multiplying by 2 a second time, which would multiply the original data by 4).

In the "three-form" model, no post-hoc corrections are applied to the low-level level-1a data table. Instead, a program reads the level-1a data and applies post-hoc corrections to produce a level-1b dataset.  The level-1b dataset is repeatedly recreated by the program, which re-applies the needed corrections each time.  It would be possible to eliminate the intermediate level-1a data by going directly from the raw data to the corrected data with a program that re-ingests and re-applies corrections. However, many of our datasets have gone through a progression of raw forms, many of them mutually incompatible, so there is a significant advantage to processing them after ingestion.
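The three-form idea can be sketched in a few lines of Python. This is only an illustrative toy, not VCR's actual processing code: the data values, range limits, and function names are all hypothetical. The key point is that corrections are always re-applied to an untouched level-1a table, so rebuilding level-1b can never double-apply a correction.

```python
def ingest_raw(raw_rows):
    """Raw -> level-1a: parse types and apply basic range checks."""
    level_1a = []
    for line in raw_rows:
        try:
            value = float(line.strip())
        except ValueError:
            continue  # drop records that fail the data-type check
        # Hypothetical plausible-range check; limits are illustrative.
        flag = "OK" if -5.0 <= value <= 50.0 else "RANGE"
        level_1a.append({"value": value, "flag": flag})
    return level_1a

def build_level_1b(level_1a, corrections):
    """Level-1a -> level-1b: apply every post-hoc correction to a
    fresh copy of the level-1a records.  Level-1a itself is never
    modified, so this step can be re-run safely at any time."""
    level_1b = []
    for rec in level_1a:
        value = rec["value"]
        for correct in corrections:
            value = correct(value)
        level_1b.append({**rec, "value": value})
    return level_1b

# Example: a sensor that reported at half scale needs a x2 correction.
raw = ["10.0", "bad record", "100.0"]
l1a = ingest_raw(raw)                          # bad record dropped, 100.0 flagged
l1b = build_level_1b(l1a, [lambda v: v * 2.0])
# Rebuilding always starts from level-1a, so the correction is applied
# exactly once per rebuild -- unlike editing level-1a in place.
l1b_again = build_level_1b(l1a, [lambda v: v * 2.0])
assert l1b == l1b_again
```

Under the old two-form model, the x2 correction would have been written into the level-1a file itself, and an accidental second run would have yielded a x4 error; here a second run is harmless by construction.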

We have also been working on a database for reporting sensor problems. The draft web forms allow users to select a type of station, identify a particular station and the sensors affected, and recommend actions to be taken.  The database will then be used to automatically generate corrective code that flags or removes problem data.
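One way the reports could drive the corrective code is sketched below. The field names (`station`, `sensor`, `start`, `end`) and the flag values are illustrative assumptions, not the actual VCR database schema: each report is turned into a rule that matches records in the reported station/sensor/time window, and matching records are flagged or removed.

```python
def make_flag_rule(report):
    """Build a predicate from one problem report: True when a record
    falls inside the reported station/sensor/time window."""
    def rule(record):
        return (record["station"] == report["station"]
                and record["sensor"] == report["sensor"]
                and report["start"] <= record["time"] <= report["end"])
    return rule

def apply_reports(records, reports, action="FLAG"):
    """Apply every report's rule to the data; flag (or drop) matches."""
    rules = [make_flag_rule(r) for r in reports]
    out = []
    for rec in records:
        if any(rule(rec) for rule in rules):
            if action == "REMOVE":
                continue          # drop the problem record entirely
            rec = {**rec, "flag": action}
        out.append(rec)
    return out

# Hypothetical report: the tide sensor at a station misbehaved
# between times 10 and 20 (times are arbitrary units here).
report = {"station": "HOGI", "sensor": "tide", "start": 10, "end": 20}
records = [
    {"station": "HOGI", "sensor": "tide", "time": 15, "flag": "OK"},
    {"station": "HOGI", "sensor": "tide", "time": 30, "flag": "OK"},
]
flagged = apply_reports(records, [report])
# Only the record inside the reported window gets flagged.
```

Because the rules are regenerated from the report database on each run, this fits naturally into the three-form model: re-flagging, like re-correcting, can be repeated without side effects.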