IoT Unit 6
To prepare the Raspberry Pi to run .NET code, you need to install Mono, which contains a
Common Language Runtime implementation that lets .NET code run on the Raspberry Pi.
This can be done by executing the following commands in a terminal window on the
Raspberry Pi:
[bash]
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install mono-complete
[/bash]
Your device is now ready to run the .NET code.
Hardware: Sensors used in the Raspberry Pi IoT project
The sensor prototype will measure three things: light, temperature, and motion. To
summarize, here is a brief description of the components:
The light sensor is a simple ZX-LDR analog sensor that will connect to a four-channel
analog-to-digital converter (Digilent Pmod AD2). This is then connected to an I2C bus
that will connect to the standard GPIO pins for I2C. Note that the I2C bus permits
synchronous communication with multiple circuits over a Serial Clock Line (SCL)
and a Serial Data Line (SDA). This is a common way to communicate with
integrated circuits.
The temperature sensor (Texas Instruments TMP102) connects directly to the same
I2C bus.
The SCL and SDA pins on the I2C bus use the recommended pull-up resistors to ensure
they are in a high state when no device is actively pulling them low.
The infrared motion detector (Parallax PIR sensor) is a digital input that can be
connected to GPIO 22.
Four LEDs will also be added to the board. One of these is green and is connected to
GPIO 23. This will show when the application is running. The second one is yellow
and is connected to GPIO 24. This will show when measurements are done. The third
one is yellow and is connected to GPIO 18. This will show when an HTTP activity is
performed. The last one is red and is connected to GPIO 25. This will show when a
communication error occurs.
The pins that control the LEDs are each connected through a 160 Ω resistor to the LED,
which in turn connects to ground. All the hardware on the prototype board is
powered by the 3.3 V supply provided by the Raspberry Pi. The 160 Ω resistor connected in
series limits the current through each LED to a safe level while still letting it emit a bright light.
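As a quick sanity check of the resistor choice, Ohm's law gives the LED drive current. The sketch below is in Python for illustration; the 2.0 V forward voltage is an assumed typical value for an LED, not a figure from the text:

```python
# Estimate the LED current set by the series resistor (Ohm's law).
# Assumption: a typical LED forward-voltage drop of about 2.0 V.
SUPPLY_V = 3.3        # Raspberry Pi 3.3 V rail
LED_FORWARD_V = 2.0   # assumed LED forward voltage
RESISTOR_OHMS = 160   # series resistor from the text

current_ma = (SUPPLY_V - LED_FORWARD_V) / RESISTOR_OHMS * 1000
print(f"{current_ma:.1f} mA")  # about 8.1 mA: safe for a GPIO pin, bright enough
```

Around 8 mA is well within what a Raspberry Pi GPIO pin can source, which is why a simple pin-resistor-LED chain works here without a driver transistor.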
In the 'learn how to create a Raspberry Pi sensor' article, we demonstrated how to develop a Pi
that is able to sense light, motion, and temperature. In the same article, we also discussed how
to use C# code to interact with hardware components and capture sensed values. This article
picks up where the previous article left off. It focuses on demonstrating how to persist
captured values in a database, export the data, and create an actuator and a controller.
Functionality to support data persistence is available in the Clayster data library. Data
persistence happens via an object database that inspects the objects you have designed
and creates a database schema able to hold them. The entry point to data persistence is a
reference to an object database, as shown below.
internal static ObjectDatabase myDb;
After referencing an object database, the information used to connect to the database
needs to be provided. One approach to passing the connection information is to add the
connection parameters to a .config file from which the application can read them. This tutorial
will demonstrate persisting data to a SQLite database. The parameters that enable
connection to the database are shown below.
DB.BackupConnectionString = "Data Source=sensor.db;Version=3;";
DB.BackupProviderName = "Clayster.Library.Data.Providers."
+"SQLiteServer.SQLiteServerProvider";
To perform data manipulation activities such as storing, updating, searching, and deleting
records, a database proxy object is used, as shown below. By using an object database, we
ensure data is not lost when the Raspberry Pi is powered off.
myDb = DB.GetDatabaseProxy ("mySensor");
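The object database itself is part of the Clayster library, but the underlying idea can be sketched in a few lines of Python. The class and function names below are illustrative, not Clayster's API: a table schema is derived from an object's fields, and instances are persisted without hand-written per-class SQL.

```python
import sqlite3

# Illustrative sketch of the object-database idea (NOT the Clayster API):
# the schema is derived from an object's fields, and instances are
# persisted without writing SQL for each class by hand.
class Record:
    def __init__(self, temperature_c, light_percent, motion):
        self.temperature_c = temperature_c
        self.light_percent = light_percent
        self.motion = motion

def create_schema(db, sample):
    cols = ", ".join(vars(sample))  # one column per object field
    db.execute(f"CREATE TABLE IF NOT EXISTS {type(sample).__name__} ({cols})")

def persist(db, obj):
    fields = vars(obj)
    marks = ", ".join("?" for _ in fields)
    db.execute(f"INSERT INTO {type(obj).__name__} VALUES ({marks})",
               tuple(fields.values()))

db = sqlite3.connect(":memory:")  # a real deployment would use a file, e.g. sensor.db
create_schema(db, Record(0.0, 0.0, False))
persist(db, Record(21.5, 63.0, True))
rows = db.execute("SELECT * FROM Record").fetchall()
print(rows)  # [(21.5, 63.0, 1)]
```

The real library does considerably more (versioning, indexing, nested objects), but the schema-from-object principle is the same.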
For our data to be useful, we need to go beyond persisting it in a database and be able to
export it for consumption by other applications. To fulfill this requirement, the IoT library
provides an XML-based sensor data format. The format orders data by timestamp, and for each
timestamp it is possible to have string, Boolean, numeric, date, or enumeration fields. Each field
has required and optional metadata associated with it. Required metadata include a name and a
value of the correct field type. Optional metadata that can be associated with a field include
the quality of service and a readout type.
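To make that description concrete, the fragment below builds a document with that shape in Python. The element and attribute names are assumptions for illustration; the exact schema is defined by the library's sensor data format.

```python
import xml.etree.ElementTree as ET

# Sketch of a timestamp-ordered sensor export. Element/attribute names
# are illustrative assumptions, not the library's exact schema.
root = ET.Element("fields")
ts = ET.SubElement(root, "timestamp", value="2015-01-01T12:00:00")
ET.SubElement(ts, "numeric", name="Temperature", value="21.5", unit="C")
ET.SubElement(ts, "numeric", name="Light", value="63.0", unit="%")
ET.SubElement(ts, "boolean", name="Motion", value="true")

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Note how each field carries the required name and typed value, with the unit as additional metadata where it applies.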
The process of exporting data begins with a call to Start() and ends with a call to End().
Both methods are available in the data export class. The export process is simplified by an
intermediate function that retrieves the data from record objects and exports them as an
array. The C# code used to export the sensed data is shown below.
private static void ExportSensorData (ISensorDataExport Output,
    ReadoutRequest Request)
{
    Output.Start ();
    lock (synchObject)
    {
        Output.StartNode ("Sensor");
        Export (Output, new Record[]
        {
            new Record (DateTime.Now, temperatureC, lightPercent, motionDetected)
        }, ReadoutType.MomentaryValues, Request);
        Export (Output, everySec, ReadoutType.HistoricalValuesSecond, Request);
        Export (Output, everyMin, ReadoutType.HistoricalValuesMinute, Request);
        Export (Output, everyHr, ReadoutType.HistoricalValuesHour, Request);
        Export (Output, everyDy, ReadoutType.HistoricalValuesDay, Request);
        Export (Output, everyMon, ReadoutType.HistoricalValuesMonth, Request);
        Output.EndNode ();
    }
    Output.End ();
}
Before any data export can happen, there are several checks done on the data. The readout
type, the time interval and fields are checked to ensure they conform to what the client
requested. Fields that meet the criteria set by the client are exported. The C# code used to
check and export fields is shown below.
private static void Export (ISensorDataExport Output,
    IEnumerable<Record> History, ReadoutType Type,
    ReadoutRequest Request)
{
    if ((Request.Types & Type) != 0)
    {
        foreach (Record Rec in History)
        {
            if (!Request.ReportTimestamp (Rec.Timestamp))
                continue;

            Output.StartTimestamp (Rec.Timestamp);

            if (Request.ReportField ("Temperature"))
                Output.ExportField ("Temperature", Rec.TemperatureC, 1, "C", Type);

            if (Request.ReportField ("Light"))
                Output.ExportField ("Light", Rec.LightPercent, 1, "%", Type);

            if (Request.ReportField ("Motion"))
                Output.ExportField ("Motion", Rec.Motion, Type);

            Output.EndTimestamp ();
        }
    }
}
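The guard `(Request.Types & Type) != 0` is a standard bit-mask test: the client's request carries a set of readout-type flags, and a series is exported only when its flag is present. A minimal Python sketch of the same idea, with illustrative names:

```python
from enum import Flag, auto

# Readout types as bit flags; a request is a mask of the types wanted.
class ReadoutType(Flag):
    MOMENTARY = auto()
    HISTORICAL_MINUTE = auto()
    HISTORICAL_HOUR = auto()

def should_export(series_type, requested):
    # Same test as (Request.Types & Type) != 0 in the C# code above.
    return bool(series_type & requested)

requested = ReadoutType.MOMENTARY | ReadoutType.HISTORICAL_HOUR
print(should_export(ReadoutType.MOMENTARY, requested))           # True
print(should_export(ReadoutType.HISTORICAL_MINUTE, requested))   # False
```

This lets a single request select any combination of momentary and historical series without a separate parameter per series.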
In an IoT project, a sensor is used to capture the state of an environment, while an actuator
utilizes the sensed state to interact with the environment. Because this article is a continuation
of the 'learn how to create a sensor project' article, the hardware mentioned here is an addition
to the previously used hardware. The hardware needed for the actuator includes an alarm and
digital outputs, which are connected through GPIO pins. The DigitalOutput class provides a
mechanism to interface with the digital outputs. To interface with the alarm, the SoftwarePwm
class will be used. The code used to set up the outputs is shown below.
private static DigitalOutput executionLed = new DigitalOutput (8, true);
private static SoftwarePwm alarmOutput = null;
private static Thread alarmThread = null;
private static DigitalOutput[] digitalOutputs = new DigitalOutput[]
{
    new DigitalOutput (19, false),
    new DigitalOutput (24, false),
    new DigitalOutput (27, false),
    new DigitalOutput (22, false),   // pin 21 on Raspberry Pi R1
    new DigitalOutput (20, false),
    new DigitalOutput (15, false),
    new DigitalOutput (14, false),
    new DigitalOutput (12, false)
};
A controller is the intelligent link between the sensor and the actuator. The controller
processes the data captured by the sensor and communicates its output through the actuator.
If the project we have developed is deployed in a home security environment, then an alarm
would sound if there is a combination of darkness and movement. Before the controller can
do any processing, it needs to acquire the sensed data. The variables shown below will be
used to hold the sensed data:
private static bool motion = false;
private static double lightPercent = 0;
private static bool hasValues = false;
Earlier in the article, we demonstrated how to export data in an XML-based format. At this
stage, we need to process the data to detect differences between current and previous values.
This processing is implemented using the code shown below.
private static bool UpdateFields (XmlDocument Xml)
{
    FieldBoolean Boolean;
    FieldNumeric Numeric;
    bool Updated = false;

    foreach (Field F in Import.Parse (Xml))
    {
        if (F.FieldName == "Motion" && (Boolean = F as FieldBoolean) != null)
        {
            if (!hasValues || motion != Boolean.Value)
            {
                motion = Boolean.Value;
                Updated = true;
            }
        }
        else if (F.FieldName == "Light" && (Numeric = F as FieldNumeric) != null
            && Numeric.Unit == "%")
        {
            if (!hasValues || lightPercent != Numeric.Value)
            {
                lightPercent = Numeric.Value;
                Updated = true;
            }
        }
    }

    return Updated;
}
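The controller rule described above (alarm on a combination of darkness and movement) can be sketched as a small decision function. The sketch is in Python for illustration, and the 20% darkness threshold is an assumed value, not one from the original text:

```python
# Controller decision rule: alarm only when it is dark AND motion is
# detected, and only after valid sensor values have been received.
DARKNESS_THRESHOLD = 20.0  # assumed light level (%) below which it is "dark"

def should_sound_alarm(has_values, motion, light_percent,
                       threshold=DARKNESS_THRESHOLD):
    if not has_values:      # no readings yet, so never alarm
        return False
    return motion and light_percent < threshold

print(should_sound_alarm(True, True, 5.0))    # True: dark and movement
print(should_sound_alarm(True, True, 80.0))   # False: movement in daylight
print(should_sound_alarm(False, True, 5.0))   # False: no valid readings yet
```

The `has_values` guard matters in practice: without it, the controller could trigger a false alarm at startup before the first readout arrives.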
The following projects are based on actuators. This list shows the latest innovative projects
that students can build to develop hands-on experience in areas related to actuators.
2. Two Mechatronics Projects
This course introduces you to the concept of industrial robotics through two different innovative
mechatronics projects. The first mechatronics project you will build is a robotic arm, which
has 3 degrees of freedom and can be controlled from your smartphone using an Android app. In the
second project, you will learn to build a two-legged robot, or biped walking robot, that has six
degrees of freedom with a hip, knee, and foot and mimics the walking action of humans.
3. Robotic Arm
In this project, you will build a robotic arm that has 3 degrees of freedom and that you can
control with your mobile phone. The robotic arm connects to the phone through Bluetooth and
can be controlled by an Android app.
4. Text Scanner
In this project-based course, you will learn to develop a computer vision-based text scanner that can
scan any text from an image using the optical character recognition algorithm and display the text on
your screen.
5. Wi-Fi Controlled Robot
In this project, you will learn to build a Wi-Fi controlled robot that can be operated remotely
via a computer or website using Wi-Fi. The robot is connected to the internet with the help of an
ESP8266 Wi-Fi module and can be controlled through commands on a web page that you will design.
If you are wondering why Tesla didn't use standardized linear actuators like the FIRGELLI
actuator, it's because off-the-shelf actuators have several constraints, which meant Tesla had
to develop its own systems to make the robot lightweight, power efficient, high in power
density, and low cost. Tesla has claimed it wants the Bot to retail for $20,000 each. This in
itself is a tall order for something that's going to require 23 actuators, a powerful computer,
lots of sensors, and a battery pack that makes it last more than a few hours, plus a strong
skeleton to hold it all together.
Tesla Bot Linear Actuators
The linear actuators Tesla developed are highly specialized for one role, which means they
would not be of much use in any application other than a robot. The actuators employ what
Tesla calls a planetary roller system, which is essentially a variant of the ball-screw
lead-screw design, and instead of a traditional motor with a magnetic armature coil in the
middle, they use a brushless motor design. The roller-screw design is very efficient and uses
less power, but it is also more expensive. The brushless motor means the lifespan will be
significantly longer and allows highly specific drive modes controlled by the software.
The length of travel is only about 2 inches, yet the picture showed one of them lifting a piano
at 500 kg, which is a lot of weight. Why does it need to lift so much? Because when the
actuator is installed in a metal skeleton, its short stroke has to be amplified into the much
larger movement of the limb it is driving. If it is moving the leg of a robot, the leg needs to
swing through about 150 degrees, so over a 2-foot length the foot traces roughly a 3-foot arc.
The human body, evolved over hundreds of thousands of years, does this easily with leg
muscles, but getting a linear actuator to do it is no easy task. The point is that even though
the actuator can lift 500 kg over 2 inches, once it is connected to a lever the force is
reduced significantly, depending on the leverage ratio, while the speed increases, which
makes for a reasonable trade-off.
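That trade-off is just the lever law: force divides by the leverage ratio while travel (and hence speed) multiplies by it. A rough Python sketch using the article's numbers (500 kg over a 2-inch stroke, amplified to roughly a 36-inch arc):

```python
# Lever trade-off: output force = input force / ratio,
# output travel = input stroke * ratio.
def through_lever(force_kg, stroke_in, output_travel_in):
    ratio = output_travel_in / stroke_in    # leverage ratio
    return force_kg / ratio, stroke_in * ratio

force_at_leg, travel = through_lever(500, 2, 36)  # ratio = 18
print(round(force_at_leg, 1), travel)  # ~27.8 kg of force over a 36-inch arc
```

So an actuator rated for 500 kg delivers on the order of tens of kilograms of force at the foot, which is why the raw lifting figure sounds so much larger than the robot's useful strength.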
Raspberry Pi™ (a clever little, low-cost computer that plugs into any monitor or TV)
An old webcam
A Joby™ tripod
A nut (a standard 1/4” - 20 nut available from most hardware stores)
1 SD card
A single-use pack of Sugru
Step 1 — Mount the tripod nut to the camera
Use just 1/5 of the Sugru pack and roll into a ball. Press firmly onto the base of the camera
and mould into a cone shape
Put the nut onto your tripod (keep it nice and loose) then press slowly and firmly into the
Sugru
Press the Sugru in and rub smooth
Gently unscrew the tripod
Step 2 — Mount the camera to the Raspberry Pi
Roll the remaining Sugru. Press it onto the back of the camera, making sure the cable is
twisted in the best direction
Mould into a cone shape. Press the camera slowly and firmly onto the Raspberry Pi
Press the Sugru in and rub gently where you can reach it
Leave for 12-24 hours for the Sugru to set
We have made the camera setup really plug and play for you by creating a Raspberry Pi
'image'. This is essentially a snapshot of a whole operating system, along with its files,
settings and installed programs that can perfectly capture images from your webcam and
serve them up to your phone, tablet, or computer. You need to download this image file and
clone it onto an SD card. Since the file is a snapshot of a whole SD card, it’s quite large,
weighing in at 16 GB. Make sure you have space for it! You can delete it afterwards. Download it here.
Now, once you’ve downloaded the Raspberry Pi image, you need to clone it onto an SD card.
This operation differs depending on the operating system your computer is running. This is
probably going to be the trickiest part of the operation, but don’t be scared!
The wonderful folks at the Raspberry Pi Organisation have made a great easy-to-follow guide
that shows you how to take your image file and burn it onto an SD card. Head on over here to
master image burning! Look at the section titled 'Writing an image to the SD card'.
Step 3 — Add in your WiFi details
Once you’ve written the image to your SD card, you should see two volumes mounted on
your computer - 'boot' and 'recovery'. Go ahead and open boot. Right at the end of the file list,
you’ll find a file called 'wifi.conf'. Open it up in a text editor (TextEdit if you’re on a Mac,
or Notepad if you’re on Windows). Here’s where you add your WiFi details so that the Pi
knows how to connect to your home network. Swap out 'YourWiFiNameHere' with whatever
your WiFi network name is, and 'YourWiFiPasswordHere' with your, you guessed it, WiFi
password! Save and close the file, and eject both volumes from your system. You can now
remove the card from your computer.
Step 4 - Connect it all up
Congratulations! You’re through to the last and most fun part. Put the SD card into the
Raspberry Pi. Connect up your webcam to one of the USB ports on the Pi. Make sure that
everything’s nicely mounted and ready to go.
Serial Port setup in Raspberry Pi OS
Configure the serial port on Raspberry Pi
The Raspberry Pi exposes a UART serial port on the GPIO header, on pin 8 (TXD, GPIO 14)
and pin 10 (RXD, GPIO 15). The UART port can be used to connect to a wide range of
devices, either directly using the 3.3 V UART pins or with a serial interface to convert the UART
voltages to an industry standard such as RS-232 or RS-485. When connecting a device to the
UART serial port, always ensure your device operates at the correct voltage, 3.3 V. Connecting a
5 V device to the UART port can damage the Raspberry Pi GPIO header.
https://youtu.be/w_z0BUkzbIg