F-POD Training

F-PODs and the software find the click trains made by toothed whales – dolphins, porpoises, beaked whales, pilot whales and all other odontocetes except the Sperm Whale.

The free F-POD app makes this data available to view, filter, analyse and export.

The F-POD stores its data on an SD card. The app reads the card and converts the raw .CHE file to a .FP1 file, and later to a .FP3 file in which the click trains are identified.

 

A summary list of supporting documents, tech info, PowerPoints etc. is here:

 

Useful Docs

1. Reading the SD card

This doc gives a quick guide:
How to read the SD card

Key points:
1. Put a specific site name in the Site Name box and don't add dates or POD numbers.
2. Have a single folder, 'New FP1 files', for ALL new FP1 files - separate folders waste time and cause worse problems.

2. Naming your files

Important points:
1. Put a specific site name in the Site Name box. Don't add dates or POD numbers. See Page 2 of the F-POD software manual.

2. If a site is more than 200 m from a site you have already named, give it a new name, perhaps by adding a number to the existing name if the two are close together.

The reason is that cetacean use of sites varies over these distances, and in long-term trend monitoring it is very helpful - in practice it is essential - to keep sites constant.
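As a quick sanity check on that 200 m rule (a minimal sketch only - the site names and coordinates here are made up, and nothing like this is built into the F-POD app), you can compare a new deployment position against the sites you have already named:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (in degrees)."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical sites already in use: name -> (latitude, longitude)
named_sites = {
    "Gwennap Head": (50.037, -5.672),
    "Gwennap Head 2": (50.039, -5.669),
}

new_position = (50.036, -5.675)  # where the POD actually went in this time

for name, (lat, lon) in named_sites.items():
    d = haversine_m(new_position[0], new_position[1], lat, lon)
    verdict = "within 200 m - reuse this name" if d <= 200 else "over 200 m away - consider a new name"
    print(f"{name}: {d:.0f} m ({verdict})")
```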

3. Cropping your files

Horrible classification errors occur in data collected while the POD was not immersed - it is often picking up radio interference, laptop noise etc., and some of that resembles dolphins!
So periods when the POD was not in the water, or when the deploying boat's sonar was being detected, should be 'cropped', producing a file with something like 'PART 128d 9h 1m' at the end of the name.

That is now the definitive record of that deployment.
See this video on cropping.

4. Finding the cetaceans

This is where the F-POD makes life easier! The KERNO-F classifier finds click trains - the sequences of similar clicks at similar intervals - that cetaceans make, and puts those clicks into an FP3 file.

See Page 2 of the F-POD software manual.

The classifier can be run on the original FP1, but it should always be re-run on the cropped file later to avoid any bias hanging over from the horrible 'out-of-water' data!

5. Is this file 'useful'?

If the POD orientation was badly wrong, or a boat with a loud sonar anchored nearby for a month, or the file is only a few hours long, etc., the file may not be useful for your quantitative study.

When you decide a file is useful put both the cropped FP1 and FP3 files in a folder called 'useful files'. That becomes the definitive source for all your subsequent work.

Having all the files in one folder saves time and avoids the nasty problem of duplicates appearing ... somehow ... messing up your analyses ... and only being discovered horribly late! This is the voice of experience talking ...

6. Are you the Data Manager?

... that person is so valuable! Every project should have a named Data Manager. If yours is a one person project, it's you!

The Data Manager handles where the files are stored, does the cropping and probably the validation, and manages the backup / archiving of your valuable data.

7. Understanding the soundscape

This is fun! Thinking about the noise levels, sound frequencies, the POD's angle to vertical, the temperature record and the train detections, you can build up a picture of what was happening.

Storms, tidal patterns, boat activity, sea bed type etc. all contribute heavily.

8. Recognising boat sonars

Boat sonars are an issue for all click classifiers because sonar is just man-made echolocation: it makes trains of ultrasonic clicks that echo between the sea surface and the seabed and are picked up, after various distortions, by the POD.

Fortunately there is a hallmark of boat sonars - a nearly flat line of inter-click intervals seen across more than 1 minute in the FP1 file.
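If you have exported click times and want a rough automated flag for that hallmark (a sketch under assumptions - the thresholds and the example data are made up, and this is not how KERNO-F itself works), the idea is simply that the inter-click intervals barely vary over a minute or more:

```python
import statistics

def looks_like_boat_sonar(click_times_s, max_cv=0.05, min_span_s=60.0):
    """Crude flag for a 'flat ICI line': successive inter-click intervals are
    nearly constant (low spread relative to their mean) over at least a minute."""
    if len(click_times_s) < 10:
        return False
    icis = [b - a for a, b in zip(click_times_s, click_times_s[1:])]
    span = click_times_s[-1] - click_times_s[0]
    cv = statistics.stdev(icis) / statistics.mean(icis)  # coefficient of variation
    return span >= min_span_s and cv <= max_cv

# Made-up example: an echo sounder pinging steadily every 0.5 s for 90 s
sounder = [i * 0.5 for i in range(181)]
print(looks_like_boat_sonar(sounder))  # True
```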

9. Validation of a file

This is a significant skill to acquire, but very interesting because it is a route to new discoveries about cetacean click trains and about other sources of click trains.

Two active areas are social click trains made by cetaceans and WUTS - weak unknown train sources.

This paper gives a basic outline of validation methods currently in use.

There are file warnings that give a good indication of whether a file has a significant risk of false positives. You can export these risk assessments from the Filters +files page.

10. What filters should I use?

The big problem here is false positives. If you select filters that maximise the number of detections, they risk high levels of false positives in adverse conditions - and those conditions may only appear later in your study and then mess up your results.

So Hi and Mod Quality trains for the species group of interest are recommended; that recommendation is based on studying those adverse conditions.

For the species group, the 'high species confidence' filter is recommended on the same basis.

11. Which statistic should I use?

You may need two statistics: one for communicating to a wide public, and a second that gives the best performance in your statistical analysis.

For the first, DPD (detection-positive days) per month or average DPH (detection-positive hours) per day are good - 'the dolphins are here on half the days in May' or 'for 5 hours a day in May'.

For the second, DPM (detection-positive minutes) per day is often good, so long as it does not go above about 1/3.
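If you export a per-minute table from the app and want to tabulate DPM per day yourself, a minimal sketch looks like this (the file name and the column names 'Minute' and 'DPM' are assumptions - use whatever your own export actually contains):

```python
import csv
from collections import defaultdict

positive_minutes = defaultdict(int)   # detection-positive minutes per day
monitored_minutes = defaultdict(int)  # all monitored minutes per day

with open("site_minutes.csv", newline="") as f:
    for row in csv.DictReader(f):
        day = row["Minute"][:10]                  # e.g. '2024-05-14' from an ISO timestamp
        monitored_minutes[day] += 1
        positive_minutes[day] += int(row["DPM"])  # 1 if the minute held a detection

for day in sorted(monitored_minutes):
    n, p = monitored_minutes[day], positive_minutes[day]
    print(f"{day}: {p} DPM of {n} monitored minutes ({p / n:.1%})")
```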

What's happening at night?

Daily patterns are often surprising. In the rias near Chelonia's workshop in SW Britain, dolphins are detected much more often at night than in the day, when many boat sonars are active.

These outputs can be obtained very quickly from the data on screen using the Graphs button and selecting your chosen statistics on the Display page of the menu.

The values can be exported for graphing in a spreadsheet.
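And if you would rather build the day/night picture in code rather than a spreadsheet, the same kind of exported minute table (again with assumed 'Minute' and 'DPM' column names) reduces to a mean detection rate per hour of day:

```python
import csv
from collections import defaultdict
from datetime import datetime

positive = defaultdict(int)  # detection-positive minutes per hour of day
total = defaultdict(int)     # monitored minutes per hour of day

with open("site_minutes.csv", newline="") as f:
    for row in csv.DictReader(f):
        hour = datetime.fromisoformat(row["Minute"]).hour
        total[hour] += 1
        positive[hour] += int(row["DPM"])

for hour in range(24):
    if total[hour]:
        print(f"{hour:02d}:00  {positive[hour] / total[hour]:.1%} of minutes with detections")
```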

Is there a trend over years?

When the same site has been monitored for more than 2 years, a trend can be obtained using the PYRA method that is built into the F-POD app and described in the F-POD software manual.

GAMMs can give misleading results when gappy data is combined with strong seasonal patterns, but good old PYRA keeps plodding along and gets it right!

Is there an impact from some event?

Impact assessments have often used the 'BACI' method - Before/After/Control/Impact - that compares the change at the impact site with the change at a nearby control site.

If monitoring starts well before the impact, the detailed data sets obtained allow a more powerful approach: virtual impacts can be postulated at many times before any real impact, and these spurious 'impacts' can be assessed to give a distribution of change levels in the absence of any impact. This can be compared with the change that coincides with the real impact.
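A toy illustration of that idea (made-up numbers and a deliberately simplified 'change' measure - not the exact procedure of any published analysis): place many virtual impacts within the baseline, collect the apparent changes they produce, and see where the real change sits in that distribution.

```python
import random

random.seed(1)

# Made-up daily detection rates: a long pre-impact baseline, then 30 days
# after the real impact with a genuinely lower rate.
baseline = [random.gauss(0.20, 0.05) for _ in range(400)]
after_real = [random.gauss(0.12, 0.05) for _ in range(30)]
window = len(after_real)

def drop_at(series, start, window):
    """Apparent drop in mean rate if an impact is assumed at day 'start'."""
    before, after = series[:start], series[start:start + window]
    return sum(before) / len(before) - sum(after) / len(after)

# Distribution of apparent 'changes' when no impact actually happened
virtual_drops = [drop_at(baseline, s, window)
                 for s in range(window, len(baseline) - window)]

real_drop = sum(baseline) / len(baseline) - sum(after_real) / window
as_extreme = sum(1 for d in virtual_drops if d >= real_drop)
print(f"real drop: {real_drop:.3f}")
print(f"virtual drops at least as large: {as_extreme} of {len(virtual_drops)}")
```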

Finding social calls

F-POD data is showing that click-based social communication is commoner than we thought and includes a bigger range of click rates. This is an area of active study.

A good start is to search for short screens (say 15 s long) with more than, say, 500 clicks in the FP3. Look at the clicks/s graph (PRF, pulse repetition frequency) for repeated click-rate profiles.

But! There are also repeated very low click rates from porpoises that are likely social.
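A quick way to shortlist candidate bursts if you have click times exported as seconds from the file start (a sketch only - the 15 s window and 500-click threshold follow the suggestion above, and the data are made up):

```python
import bisect

def busy_windows(click_times_s, window_s=15.0, min_clicks=500):
    """Return (start_time, click_count) for every click that begins a window
    of window_s seconds containing at least min_clicks clicks."""
    times = sorted(click_times_s)
    hits = []
    for i, t0 in enumerate(times):
        j = bisect.bisect_right(times, t0 + window_s)
        if j - i >= min_clicks:
            hits.append((t0, j - i))
    return hits

# Made-up example: sparse background clicks plus a dense ~6 s burst of 600 clicks
example = [k * 5.0 for k in range(100)] + [200.0 + k * 0.01 for k in range(600)]
for start, n in busy_windows(example)[:3]:
    print(f"{n} clicks in the 15 s from t = {start:.1f} s")
```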

Distinguishing species

The KERNO-F classifier only groups cetaceans into NBHF (narrow-band high-frequency) species or others, but more can be done...

Knowing the position of a click in a train helps resolve the effect of click axis on received click kHz, and the position of a train in an encounter helps resolve the effect of distance on received click kHz.

The Third Party Export options in the F-POD app give you access to all of that and a lot more (the rich set of derived features elaborated by the KERNO-F classifier) for your own classifier, and also the ability to write your classification back into the data file.
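As a bare-bones starting point for 'your own classifier' built on such an export (everything here is hypothetical - the file name, the feature columns and the label column stand in for whatever the Third Party Export actually gives you, and scikit-learn is just one convenient choice):

```python
import csv
from sklearn.ensemble import RandomForestClassifier

features, labels = [], []
with open("exported_clicks.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Placeholder feature names - substitute the real exported columns.
        features.append([float(row["kHz"]),
                         float(row["duration_us"]),
                         float(row["position_in_train"])])
        labels.append(row["species_label"])  # e.g. from matched sightings

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, labels)
# clf.predict(new_features) could then be written back into the FP3 file
# using the write-back option mentioned above.
```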

Does the high sampling rate help anything?

The F-POD samples at 1 MS/s (a million samples per second) and upsamples this to 4 MS/s. This gives very precise wave period measurements, which contribute to the world-beating performance of the KERNO-F classifier.

They also enable us to see the time evolution of elements within clicks that may not be subject to direct evolutionary selection, and may consequently vary more between species than spectral features that may be shaped by convergent evolution acting on their echo functionality.

Can we integrate our sightings records?

Yes! Please do!

This is done via a spreadsheet: POD-sightings-entry.xlsx

The species text can then be displayed in high-resolution displays and the data can be filtered by the species present in the minute.

Video replay of spatial patterns

There is an array viewer that can show patterns of occurrence in a grid of stations that you monitored over a period of months or years.

The original POD data is compressed into an 'array file' that can be read at speed to allow this display.

Creating engaging Audio replays

For engaging community interest, graphs may be almost counter-productive ... so do try experimenting with the 'play' button in high-resolution displays.

The audio created can be saved as a .wav file and you can convert that online to an .mp3.

In these files the click rate profile is transformed in a linear way into a pitch profile, and some of these have a fun engaging quality.
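If you want to experiment with the same linear click-rate-to-pitch idea outside the app (a standalone sketch, not how the app's 'play' button works internally; the rate profile, base pitch and scaling are all made up):

```python
import math
import struct
import wave

def rates_to_wav(rates, filename="replay.wav", sr=22050, seg_s=0.1, hz_per_click=10.0):
    """Write a wav in which each click-rate value (clicks/s) becomes a short
    tone whose pitch is a linear function of the rate."""
    with wave.open(filename, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(sr)
        for rate in rates:
            freq = 100.0 + hz_per_click * rate   # the linear rate -> pitch map
            n = int(sr * seg_s)
            samples = [int(12000 * math.sin(2 * math.pi * freq * i / sr))
                       for i in range(n)]
            w.writeframes(struct.pack(f"<{n}h", *samples))

# Made-up click-rate profile rising and falling
rates_to_wav([20, 60, 150, 300, 150, 60, 20])
```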

How long are the dolphin visits?

The F-POD app has an easy way to answer this - it's the auto-correlation function that is accessed via the Display page.

Set the appropriate species filter and choose 1 or 5 minute time units for 'residency time'.

To see if there are patterns corresponding to day or tide length, pick 6 h or 'tidal' time units and see if the a.c.f. peaks correspond to successive 4th time lines.
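The same kind of autocorrelation is easy to reproduce on an exported 0/1 detection series if you want to check the peaks yourself (a sketch with made-up data; the app's own a.c.f. output is the normal route):

```python
import random

def autocorrelation(series, max_lag):
    """Autocorrelation of a detection series (0/1 per time unit) at lags 1..max_lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    acf = []
    for lag in range(1, max_lag + 1):
        cov = sum((series[i] - mean) * (series[i + lag] - mean)
                  for i in range(n - lag)) / n
        acf.append(cov / var)
    return acf

# Made-up series: detections tending to recur every 12 time units (a 'tidal' rhythm)
random.seed(0)
series = [1 if (t % 12) < 3 and random.random() < 0.7 else 0 for t in range(2000)]
for lag, r in enumerate(autocorrelation(series, 36), start=1):
    if lag % 6 == 0:
        print(f"lag {lag:2d}: {r:+.2f}")
```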

Marking and editing trains

Trains can be marked, either individually or all those in a minute, or their cetacean status can be removed. You can then filter these out, or view only them, by clicking repeatedly on the 'marked trains included' text on the left of the main screen.

BUT: it's not a good idea to edit out (or in) trains that you think were bad or good, as this is liable to be a very big task and your efforts will be subjective.

SO: the alternative is to decide what level of error your main question can tolerate (i.e. it won't actually affect your conclusion) and then use the quantitative validation to measure what error rate you have. If it's tolerable, and it usually is, then do no editing.

Help with Pinger trial data

Pinger trials can be distorted if the pinger is audible to a POD and the characteristics of the pinger are such as to create a bias against identification of a cetacean.

Should I worry about the non-independence of the classifier?

When the classifier detects a dolphin it is less ready to detect a porpoise. A sonar makes it less likely to detect a cetacean in the same minute, etc.

These interactions are generally not a worry unless the 'species' tend to occur together, because the fraction of minutes during which the bias might be active is generally small.

BUT: if you want to study these interactions, you do need to work around this bias explicitly, and methods are available to do this.

Keeping track of interesting stuff

In the right-click pop-up menu of the software, there is an option to 'Save comment file'. When you see something you want to revisit or ask about, save one of these and give it a really meaningful name that you will be able to find again.

You can revisit what you saw by loading this file via the 'Filters +files' page and it will take you to the view you had. Or you can send us the FP1/FP3 files plus the comment file.

Getting help

This website is new and needs a lot of bits filled in.
If there are things missing that you need, please get in touch.

Please use the email contact link at the bottom of the page.

All feedback is gratefully received!