Creating Your Personal IoT/Utility Dashboard Using Grafana, InfluxDB & Telegraf on a Raspberry Pi
- Last Updated: December 2, 2024
Neelabh Kumar
This article explains how to set up a personal IoT & utility dashboard.
Raspberry Pi and Sense HAT Setup:
Setting up a Raspberry Pi is fairly easy following the official guide. Once you have set up the Pi, you need to connect the Sense HAT to it. The Raspberry Pi Sense HAT attaches on top of the Raspberry Pi via the 40 GPIO pins. Please follow this guide; it shouldn't take more than 5 minutes.
How you use the dashboard is completely up to you. All the code I'm going to post here deals with the information I want to collect and the way I want it displayed. You will need to change a few things to run it on your machine, and you are free to change it as you please.
I use Python 3 to log the room temperature and humidity values from the sensors. I also use the OpenWeather API to get the outside weather conditions.
import time
import logging

import requests
import pytemperature
import vcgencmd
from sense_hat import SenseHat

sense = SenseHat()
logging.basicConfig(filename='temperature.log', filemode='a',
                    format='%(created)f %(message)s', level=logging.INFO)

while True:
    t = sense.get_temperature()
    h = sense.get_humidity()
    CPUc = vcgencmd.measure_temp()
    r = requests.get('http://api.openweathermap.org/data/2.5/weather'
                     '?id="Your Location id"&APPID="Your API Key"')
    result = r.json()
    outsideTemp = pytemperature.k2c(result["main"]["temp"])
    outsideHumid = result["main"]["humidity"]
    logging.info('Temp={0:0.1f} C and Humidity={1:0.1f}% and CPU_Temp={2:0.1f} '
                 'and ot={3:0.1f} and oh={4:0.1f}%'.format(
                     t, h, CPUc, outsideTemp, outsideHumid))
    time.sleep(4)
You can use Python 3 to feed the data directly into InfluxDB; however, logging to a file is a much neater way to do it. As you can see in the code, I'm getting the room metrics from the sensors using the SenseHat library. For outside weather conditions, I'm using the OpenWeather API. Please follow the steps below to get your own API key and Location ID.
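To see what the OpenWeather response looks like in isolation, here is a small sketch (the city ID and API key are placeholders you obtain from openweathermap.org; the Kelvin-to-Celsius offset is essentially what pytemperature.k2c does):

```python
def parse_weather(result):
    """Extract (temperature in Celsius, humidity in %) from an
    OpenWeather current-weather JSON response. OpenWeather reports
    temperature in Kelvin by default, hence the 273.15 offset."""
    temp_c = round(result["main"]["temp"] - 273.15, 2)
    humidity = result["main"]["humidity"]
    return temp_c, humidity


def fetch_outside_weather(city_id, api_key):
    """Fetch current conditions for a city ID (city_id and api_key
    are placeholders; register at openweathermap.org to get both)."""
    import requests  # imported here so parse_weather stays usable without it
    r = requests.get(
        "http://api.openweathermap.org/data/2.5/weather",
        params={"id": city_id, "APPID": api_key},
        timeout=10,
    )
    r.raise_for_status()
    return parse_weather(r.json())
```
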
You will notice that I'm using vcgencmd to get the CPU temperature. The script should log the data in the temperature.log file. Try running it with python3 and check whether the log file is being generated. It will look something like this:
1570130669.852521 Temp=24.8 C and Humidity=64.1% and CPU_Temp=50.5 and ot=11.6 and oh=93.0%
1570130674.022393 Temp=24.9 C and Humidity=63.5% and CPU_Temp=50.5 and ot=11.6 and oh=93.0%
1570130678.148942 Temp=24.8 C and Humidity=64.2% and CPU_Temp=49.9 and ot=11.5 and oh=93.0%
1570130682.303456 Temp=25.0 C and Humidity=64.1% and CPU_Temp=50.5 and ot=11.6 and oh=93.0%

Setting up InfluxDB and Telegraf:
InfluxDB Setup:
In this step I'll show you how to set up InfluxDB and Telegraf.
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test $VERSION_ID = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "9" && echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update && sudo apt-get install influxdb
Once the installation is done, you need to start the service using:
sudo service influxdb start
Please check the status of InfluxDB using:
sudo service influxdb status
You should get something like this if everything is working:
● influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-10-03 23:05:15 EDT; 22h ago
     Docs: https://docs.influxdata.com/influxdb/
 Main PID: 15832 (influxd)
    Tasks: 17 (limit: 2077)
   Memory: 110.8M
   CGroup: /system.slice/influxdb.service
           └─15832 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
Telegraf Setup:
Find the latest armhf version for your Pi here and fetch it using wget.
wget https://dl.influxdata.com/telegraf/releases/telegraf_1.12.2-1_armhf.deb
Install.
sudo dpkg -i telegraf_1.12.2-1_armhf.deb
At this stage, you should start the metric-collection script in the background. It will start logging all the values into the temperature.log file. I have chosen to use nohup.
nohup python3 iotutil.py &
The idea behind using Telegraf and InfluxDB is to make the data collection and querying seamless.
InfluxDB is a high-performance data store written specifically for time-series data. It allows high-throughput ingest, compression, and real-time querying. As we go forward, you will notice that we query the data even as it is being placed into the DB.
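As a sketch of that query-while-ingesting workflow, here is a minimal example using the InfluxDB 1.x Python client. The helper only builds an InfluxQL string; the networked part assumes InfluxDB is listening on localhost:8086 and that Telegraf has already created the temperature database:

```python
def mean_query(measurement, field, minutes=30):
    """Build an InfluxQL query for the mean of a field over the last N minutes."""
    return ('SELECT mean("{f}") FROM "{m}" WHERE time > now() - {n}m'
            .format(f=field, m=measurement, n=minutes))


if __name__ == "__main__":
    # Requires: pip install influxdb (the 1.x client library)
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="temperature")
    result = client.query(mean_query("room_temperature_humidity", "temperature"))
    for point in result.get_points():
        print(point)
```
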
Telegraf makes the job of cleaning and feeding continuous data into InfluxDB seamless. I'm using a grok log parser with Telegraf to extract meaningful data from the logs we just created. Writing grok patterns for the first time can be really tricky; please refer to this pattern matcher if you wish to create a custom pattern beyond what is included in the code.
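Grok patterns compile down to regular expressions, so when a Telegraf pattern misbehaves it can help to prototype the match in plain Python first. A sketch against the log format produced above (the regex is my rough equivalent of the grok pattern, not what Telegraf generates internally):

```python
import re

# Rough plain-regex equivalent of the TEMPERATURE_HUMIDITY_PATTERN grok pattern
LOG_RE = re.compile(
    r"(?P<timestamp>\d+\.\d+) "
    r"Temp=(?P<temperature>[-\d.]+) C and "
    r"Humidity=(?P<humidity>[\d.]+)% and "
    r"CPU_Temp=(?P<CPU_Temp>[-\d.]+) and "
    r"ot=(?P<ot>[-\d.]+) and "
    r"oh=(?P<oh>[\d.]+)%"
)


def parse_line(line):
    """Return the parsed log fields as floats, or None if the line doesn't match."""
    m = LOG_RE.match(line)
    if not m:
        return None
    return {name: float(value) for name, value in m.groupdict().items()}
```
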
Telegraf is going to fetch the data from various inputs and feed it into InfluxDB. You will need to define several inputs and one output. It will also create the tables automatically. For all of this, we need to write a conf file; name it iotlog.conf and paste in the content below.
[agent]
  # Batch size of values that Telegraf sends to output plugins.
  metric_batch_size = 1000
  # Default data collection interval for inputs.
  interval = "30s"
  # Added degree of randomness in the collection interval.
  collection_jitter = "5s"
  # Send output every 5 seconds.
  flush_interval = "5s"
  # Buffer size for failed writes.
  metric_buffer_limit = 10000
  # Run in quiet mode, i.e. don't display anything on the console.
  quiet = true

# Ping given url(s) and return statistics
[[inputs.ping]]
  ## NOTE: this plugin forks the ping command. You may need to set capabilities
  ## via setcap cap_net_raw+p /bin/ping
  ## urls to ping
  urls = ["www.github.com", "www.amazon.com", "1.1.1.1"]
  ## number of pings to send per collection (ping -c <COUNT>)
  count = 3
  ## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
  ping_interval = 15.0
  ## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
  timeout = 10.0
  ## interface to send ping from (ping -I <INTERFACE>)
  interface = "wlan0"
# Gather metrics about network interfaces
[[inputs.net]]
  ## By default, telegraf gathers stats from any up interface (excluding loopback)
  ## Setting interfaces will tell it to gather these explicit interfaces,
  ## regardless of status. When specifying an interface, glob-style
  ## patterns are also supported.
  interfaces = ["wlan0"]
  ##
  ## On linux systems telegraf also collects protocol stats.
  ## Setting ignore_protocol_stats to true will skip reporting of protocol metrics.
  # ignore_protocol_stats = false

# Read metrics about cpu usage
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = true
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false
[[inputs.logparser]]
  ## file(s) to read:
  files = ["/home/pi/grafanaflux/temperature.log"]
  # Only send these fields to the output plugins
  fieldpass = ["temperature", "humidity", "timestamp", "ot", "oh", "CPU_Temp"]
  tagexclude = ["path"]
  # Read the file from beginning on telegraf startup.
  from_beginning = true
  name_override = "room_temperature_humidity"
  ## For parsing logstash-style "grok" patterns:
  [inputs.logparser.grok]
    patterns = ["%{TEMPERATURE_HUMIDITY_PATTERN}"]
    custom_patterns = '''TEMPERATURE_HUMIDITY_PATTERN %{NUMBER:timestamp:ts-epoch} Temp=%{NUMBER:temperature:float} C and Humidity=%{NUMBER:humidity:float}% and CPU_Temp=%{NUMBER:CPU_Temp:float} and ot=%{NUMBER:ot:float} and oh=%{NUMBER:oh:float}'''
    ## A looser alternative pattern:
    ## custom_patterns = '''TEMPERATURE_HUMIDITY_PATTERN %{NUMBER:timestamp:ts-epoch} Temp=%{NUMBER:temperature:float} %{GREEDYDATA}=%{NUMBER:humidity:float}%{GREEDYDATA}'''
    timezone = "Local"
[[outputs.influxdb]]
  ## The full HTTP or UDP URL for your InfluxDB instance.
  urls = ["http://127.0.0.1:8086"] # required
  ## The target database for metrics (telegraf will create it if not exists).
  database = "temperature" # required
  ## Name of existing retention policy to write to. Empty string writes to
  ## the default retention policy.
  retention_policy = ""
  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
  write_consistency = "any"
  ## Write timeout (for the InfluxDB client), formatted as a string.
  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
  timeout = "10s"
  # username = "telegraf"
  # password = "metricsmetricsmetricsmetrics"
  ## Set the user agent for HTTP POSTs (can be useful for log differentiation)
  # user_agent = "telegraf"
  ## Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
  # udp_payload = 512
As you can see, I have created a few inputs (cpu, net, ping, logparser) as per my needs. You can keep or remove them depending on yours. For reference, the logparser input is the one fetching data from the logs.
It should be clear by now that InfluxDB is going to create a database and multiple tables, one for each input. The data we are logging will be in the room_temperature_humidity table; similarly, cpu for the cpu input.
Just to give you an idea, I'm using the cpu input to collect and display system-specific data, and net for network details such as packets received, dropped, etc.
Run this in the background using the command:
telegraf --config iotlog.conf &
Install Grafana:
wget https://dl.grafana.com/oss/release/grafana_6.2.2_armhf.deb
sudo dpkg -i grafana_6.2.2_armhf.deb
sudo apt-get update
sudo apt-get install grafana
sudo service grafana-server start
This will start the Grafana server, and you can now access Grafana on the default port 3000. Open up a browser, go to http://raspberry_pi:3000/, and log in using the default username and password: admin, admin. If you are opening it on the Raspberry Pi itself, you can use http://localhost:3000/.
Once you've logged into Grafana, add InfluxDB as the default data source and start creating dashboards.
Add the InfluxDB URL, which will be http://localhost:8086 if you're running InfluxDB locally. Add a database name such as temperature. Leave everything else at its default.
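If you prefer to script this step, the data source can also be created through Grafana's HTTP API (POST /api/datasources). A sketch, assuming Grafana is on port 3000 and still has the default admin/admin credentials:

```python
def influxdb_datasource(url="http://localhost:8086", database="temperature"):
    """Build the JSON payload for Grafana's POST /api/datasources endpoint."""
    return {
        "name": "InfluxDB",
        "type": "influxdb",
        "access": "proxy",   # Grafana's backend proxies the queries
        "url": url,
        "database": database,
        "isDefault": True,
    }


if __name__ == "__main__":
    import requests  # assumes Grafana reachable with default admin/admin login
    resp = requests.post(
        "http://localhost:3000/api/datasources",
        auth=("admin", "admin"),
        json=influxdb_datasource(),
        timeout=10,
    )
    print(resp.status_code, resp.text)
```
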
Setting up the dashboards in Grafana is pretty straightforward. The data is fetched through a query which you can either write yourself or build with the GUI option. For example, to get the mean room temperature you would do something like this:
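For reference, the InfluxQL that the query editor produces for a mean-room-temperature panel looks roughly like this ($timeFilter and $__interval are Grafana template variables filled in at query time; the measurement and field names follow the Telegraf config above):

```sql
SELECT mean("temperature") FROM "room_temperature_humidity"
WHERE $timeFilter GROUP BY time($__interval) fill(null)
```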
Formulating a query
My personal dashboard looks like this:
My dashboard: https://cdn-images-1.medium.com/max/1200/1*zu39zFjSaaDiEcIn6sRAYA.png
If you have any problem setting up Grafana, please refer to the official guide.