Reimagining the chicken coop with predator detection, Wi-Fi control, and more

The traditional backyard chicken coop is a very simple structure that typically consists of a nesting area, an egg-retrieval panel, and a way to provide food and water as needed. Realizing that some aspects of raising chickens are too labor-intensive, the Coders Cafe crew decided to automate most of the daily care process by bringing some IoT smarts to the traditional hen house.

Controlled by an Arduino UNO R4 WiFi and actuated by a stepper motor, the front door of the coop relies on a rack-and-pinion mechanism to quickly open or close at the scheduled times. After the chickens have entered the coop to rest or lay eggs, they can be fed using a pair of fully automatic dispensers. Each dispenser is a hopper with a screw at the bottom that, aided by gravity, draws in the feed and gently distributes it onto the ground. And as with the door, feedings can be scheduled in advance through the team’s custom app and the UNO R4’s integrated Wi-Fi chipset.

The last and most advanced feature is the AI predator detection system. Thanks to a DFRobot HuskyLens vision module and its built-in training process, images of predatory animals can be captured and used to teach the HuskyLens when to generate an alert. Once an animal has been detected, the module notifies the UNO R4 over I2C, which in turn sends an SMS message via Twilio.

More details about the project can be found in Coders Cafe’s Instructables writeup.

The post Reimagining the chicken coop with predator detection, Wi-Fi control, and more appeared first on Arduino Blog.

Making fire detection more accurate with ML sensor fusion

The mere presence of a flame in a controlled environment, such as a candle, is perfectly acceptable, but when tasked with determining if there is cause for alarm solely using vision data, embedded ML models can struggle with false positives. Solomon Githu’s project aims to lower the rate of incorrect detections with a multi-input sensor fusion technique wherein image and temperature data points are used by a model to alert if there’s a potentially dangerous blaze.

Gathering both kinds of data is the Arduino TinyML Kit’s Nano 33 BLE Sense. Using the kit, Githu captured a wide variety of images with the OV7675 camera module and temperature readings with the Nano 33 BLE Sense’s onboard HTS221 sensor. After exporting a large dataset of fire and fire-free samples across a range of ambient temperatures, he leveraged Google Colab to train the model before importing it into Edge Impulse Studio, where the model’s memory footprint was further reduced to fit onto the Nano 33 BLE Sense.

The inferencing sketch polls the camera for a new frame; once the frame has been resized, its pixel data and a fresh sample from the temperature sensor are merged and sent through the model, which outputs either “fire” or “safe_environment”. As detailed in Githu’s project post, the system accurately classified several scenarios in which a flame combined with elevated temperatures resulted in a positive detection.

The post Making fire detection more accurate with ML sensor fusion appeared first on Arduino Blog.

Making a car more secure with the Arduino Nicla Vision

Shortly after attending a recent tinyML workshop in São Paulo, Brazil, Joao Vitor Freitas da Costa was looking for a way to incorporate some of the technologies and techniques he learned into a useful project. Given that he lives in an area that experiences elevated levels of pickpocketing and automotive theft, he turned his attention to a smart car security system.

His solution to a potential break-in or theft of keys centers on an Arduino Nicla Vision board running a facial recognition model that only allows the vehicle to start if the owner is sitting in the driver’s seat. Each pass of the image detection/processing loop grabs the next frame from the board’s camera and sends it to a classification model, which assigns one of three labels: none, unknown, or Joao, the driver. Once the driver has been detected for 10 consecutive seconds, the Nicla Vision activates a relay to complete the car’s 12V battery circuit, at which point the vehicle can be started normally with the ignition.

Through this project, da Costa was able to explore a practical application of vision models at the edge to make his friend’s car safer to use. To see how it works in more detail, you can check out the video below and delve into the tinyML workshop he attended here.

The post Making a car more secure with the Arduino Nicla Vision appeared first on Arduino Blog.