There are hundreds of web sites giving instructions on reading weather sensors with a Raspberry Pi, but I have developed additional software functions to overcome the shortcomings of the wind sensors.
The wind direction and speed sensors I have are inexpensive units produced as replacements for Maplin weather stations.
While the speed sensor is adequate for the job, the direction sensor is lightweight and has no mechanical damping, so in light winds it tends to spin round and point in all directions, producing inconsistent results.
The anemometer I have uses a rotating magnet to open and close a reed switch, which can be read by a digital input on a computer.
Most web pages favour using an interrupt system to measure the period between reed switch operations to determine wind speed. I have chosen instead to count the rotations of the anemometer over a 3 second sample to produce an average wind speed.
As the Pi only reads the sensors every 60 seconds, there is plenty of free time for it to take a 3 second sample; I believe this is also how the Met Office determines wind speed.
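The counting approach above can be sketched as follows. The `read_pin` callable, the debounce delay, and the 2.4 km/h-per-Hz calibration factor are my assumptions for illustration, not values from the article; check the calibration of your own anemometer.

```python
import time

def count_pulses(read_pin, sample_secs=3.0):
    """Count reed-switch closures over a sampling window.

    `read_pin` is any callable returning the current digital level
    (e.g. a wrapper around RPi.GPIO.input on a real Pi); passing a
    callable keeps this testable off the Pi.
    """
    count = 0
    last = read_pin()
    end = time.monotonic() + sample_secs
    while time.monotonic() < end:
        level = read_pin()
        if level and not last:   # rising edge = one switch closure
            count += 1
        last = level
        time.sleep(0.001)        # crude debounce / CPU relief
    return count

def pulses_to_speed_kmh(pulses, sample_secs=3.0, kmh_per_hz=2.4):
    """Convert the closure count to a speed. 2.4 km/h per closure
    per second is a commonly quoted figure for these budget
    anemometers -- an assumption, not the author's calibration."""
    return (pulses / sample_secs) * kmh_per_hz
```

Polling in a tight loop like this is wasteful compared with interrupts, but as the article notes, the Pi is idle between the 60 second readings anyway.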
The wind direction sensor uses an array of reed switches and resistors to produce a varying voltage output when connected in series with a 10k external resistor. This gives 16 distinct voltage levels, which can be converted into the 16 points of the compass.
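Converting a measured voltage to a compass point amounts to a nearest-level lookup. The voltages themselves depend on the supply voltage and the series resistor, so the calibration table here is entirely illustrative.

```python
COMPASS_POINTS = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
                  "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def direction_from_voltage(volts, level_table):
    """Return the compass point whose expected voltage is nearest.

    `level_table` maps each of the 16 compass points to the voltage
    you measured for it during calibration (an assumed structure,
    not the article's actual lookup).
    """
    return min(level_table, key=lambda point: abs(level_table[point] - volts))
```

Picking the nearest level rather than exact-matching gives some tolerance to ADC noise and resistor drift.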
Because it is lightweight and undamped, it can give inconsistent results if only one reading is made. What my software does is take 8 samples in a 2 second interval and compare the results to determine the spread of the samples. If 4 or more of the samples fall in the same sixteenth-of-compass direction, it returns the average of all the samples; if not, it discards the samples and returns the previous average reading.
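A sketch of that spread check, assuming readings arrive as degrees in multiples of 22.5. The vector (circular) average is my own choice here, to avoid the wrap-around problem when averaging headings either side of north; the article does not say how its average is computed.

```python
import math

def filtered_direction(samples_deg, previous_avg, min_agree=4):
    """Return an averaged direction if enough samples agree.

    If fewer than `min_agree` of the samples fall in the same
    sixteenth-of-compass sector, the batch is discarded and the
    previous average is returned instead, as the article describes.
    """
    sectors = [round(d / 22.5) % 16 for d in samples_deg]
    most_common = max(set(sectors), key=sectors.count)
    if sectors.count(most_common) < min_agree:
        return previous_avg
    # Vector average: sum unit vectors, then take the angle of the sum.
    x = sum(math.cos(math.radians(d)) for d in samples_deg)
    y = sum(math.sin(math.radians(d)) for d in samples_deg)
    return math.degrees(math.atan2(y, x)) % 360
```

A naive arithmetic mean would report 180 degrees for samples of 350 and 10; the vector form correctly gives a direction near north.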
This averaged output, along with the 8 samples, is saved in what I call the fine resolution database every 60 seconds, which is then synchronized with the main desktop PC every 15 minutes. Data in this database is only kept for 24 hours before being culled, but it provides raw data should I want to do some additional processing on the main PC.
Every 15 minutes a script on the main PC reads the last 15 one-minute readings from the outstation database and produces an average and a gust reading for saving in the main database.
It can also produce a separate direction corresponding to the maximum (gust) speed for the period.
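The 15 minute reduction can be sketched as below; the `(speed, direction)` tuple shape is my assumption about the outstation database rows, not the article's actual schema.

```python
def summarise_quarter_hour(minute_records):
    """Reduce 15 one-minute (speed, direction_deg) records to an
    average speed, a gust (maximum) speed, and the direction that
    was recorded at the gust."""
    speeds = [s for s, _ in minute_records]
    avg = sum(speeds) / len(speeds)
    gust_speed, gust_dir = max(minute_records, key=lambda r: r[0])
    return avg, gust_speed, gust_dir
```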
All of the other readings are averages of the last 10 minutes. I don't believe it is necessary to have a sample period of less than 60 seconds, as that would produce an awful lot of data, most of which would never be used.
I also run a UDP server on the outstation so that the latest raw readings can be queried live.
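A minimal sketch of such a responder is below. The port number and the JSON reply format are placeholders of mine; the article does not describe its actual protocol.

```python
import json
import socket

def serve_once(latest_readings, host="0.0.0.0", port=5005):
    """Answer a single UDP request with the latest readings as JSON.

    Any incoming datagram is treated as a request; a real server
    would loop forever rather than answer once.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        _, addr = sock.recvfrom(1024)
        sock.sendto(json.dumps(latest_readings).encode(), addr)
```

UDP suits this job well: a lost request simply means the client asks again, and there is no connection state to manage on the Pi.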
The desktop PC runs several scripts every 15 minutes to produce the final weather web page before uploading it to the server.
The whole of the web site is coded in HTML, CSS and PHP with no templates; this produces a unique website that I can change at any time if something doesn't look right.
One main Python script integrates all of the data for the web page, taking inputs from other scripts or programs to generate a static page. Other scripts produce the tidal data, the moon data, the instrument image, the weather forecast, and the cloud and visibility data.
With the exception of the tide data, for which I use XTide, all the other data comes from Python scripts I have developed in house or ported from other code, as in the case of the Sager weather lookup, which was originally written in JavaScript by Naish666.
The instrument image is produced by using the Python Imaging Library (PIL) to take the data and draw needles onto a background image; the moon data is derived from the Python ephem library, with added full moon names.
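The core of the needle drawing is mapping an instrument value onto dial coordinates. The 225 degree start angle and 270 degree sweep below describe a typical round gauge and are assumptions, not the article's actual dial layout; with PIL you would then draw `ImageDraw.Draw(img).line([centre, tip], ...)`.

```python
import math

def needle_end(centre, length, value, vmin, vmax,
               start_deg=225, sweep_deg=270):
    """Map an instrument value to the needle-tip (x, y) pixel position.

    `start_deg` is the dial angle at `vmin` (measured anticlockwise
    from the positive x axis); the needle sweeps `sweep_deg` degrees
    from `vmin` to `vmax`.
    """
    frac = (value - vmin) / (vmax - vmin)
    angle = math.radians(start_deg + frac * sweep_deg)
    cx, cy = centre
    # Image y coordinates grow downward, hence the minus on the sine.
    return (cx + length * math.cos(angle), cy - length * math.sin(angle))
```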
The cloud cover and visibility data comes from using PIL to scan over a webcam image, looking for blue sky and comparing areas for contrast to detect mist and fog.
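The blue-sky part of that scan reduces to classifying pixels. The thresholds below (blue clearly dominant over red and green) are my guess at the kind of rule meant; the function takes plain `(r, g, b)` tuples, the shape returned by `PIL.Image.getdata()`, so it is shown without PIL itself.

```python
def blue_sky_fraction(pixels):
    """Fraction of pixels that look like blue sky.

    `pixels` is an iterable of (r, g, b) tuples, e.g. from
    PIL.Image.getdata(). The thresholds are illustrative and would
    need tuning against real webcam frames.
    """
    pixels = list(pixels)
    blue = sum(1 for r, g, b in pixels
               if b > 100 and b > r + 20 and b > g + 20)
    return blue / len(pixels)
```

A high blue fraction suggests clear sky; for mist and fog detection the equivalent step would compare the contrast between image regions, with uniformly low contrast indicating poor visibility.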