
Learn Edge AI on Unihiker K10
Edge AI is becoming increasingly important as more applications demand real-time intelligence directly on embedded devices. With this in mind, the UNIHIKER K10 stands out as a powerful new development board designed for education, prototyping, and AI-based projects. It comes packed with onboard peripherals such as a camera, microphone, display, speaker, buttons, and multiple sensors, making it an excellent platform for learning and experimenting with Edge AI.
However, being a newly released board, the UNIHIKER K10 presents a unique challenge. DFRobot provides a pre-built UNIHIKER K10 library that simplifies access to all the onboard peripherals. While this makes the board very easy to use for beginners, it also limits low-level hardware control. This becomes a major obstacle when trying to build custom applications, especially advanced workflows like integrating Edge Impulse with custom sensor pipelines and camera handling.
In this tutorial, we go beyond the default library approach. We not only demonstrate how to interface Edge Impulse with the UNIHIKER K10 and use its sensors for Edge AI applications, but we also reverse-engineer the board to gain deeper control over the hardware. By studying the official datasheet and hardware architecture, we implement our own custom code, enabling advanced Edge Impulse use cases that are not possible using the standard libraries alone.
By the end of this project, you will understand:
- How to run Edge AI models on the UNIHIKER K10
- How to work with its camera and sensors for Edge Impulse
- How to bypass library limitations using datasheet-based custom coding
- How to unlock the board’s full potential for real-world Edge AI applications
This guide is ideal for makers, students, and developers who want to move beyond basic examples and truly explore low-level Edge AI development on the UNIHIKER K10.
If you're a beginner, don't worry: in this tutorial, you'll learn how to build real Edge AI models on the Unihiker K10 using Edge Impulse.
We’ll walk through:
- IMU gesture classification
- Environmental sensor regression
- Audio classification
- Image object detection
By the end, your K10 will be running multiple AI models directly on the device — no cloud processing needed!
GitHub Repo: https://github.com/MukeshSankhla/Learn-Edge-AI-on-Unihiker-K10

Supplies
- 1x Unihiker K10 Kit
- 1x SD Card
- 1x 3D Printed Case (Optional)
**Software apps and online services**
- Download the Project GitHub Repository
- Go to the project’s GitHub page: Learn-Edge-AI-on-Unihiker-K10-Edge-Impulse-Beginner-Tutorial
- Click Code → Download ZIP.
- Extract the ZIP file to your computer.
- Copy the Assets Folder to the SD Card
- Inside the extracted project folder, locate the assets folder.
- This folder contains the required images.
- Insert the SD card used with the Unihiker K10 into your computer.
- Copy the entire assets folder into the root directory of the SD card.
Download the Case.stl and 3D print it, or open the Unihiker Case in Fusion 360 if you want to make changes.
Step 1: Introduction to UNIHIKER K10

The UNIHIKER K10 is an all-in-one AI learning board designed for beginners, makers, and educators who want to explore Edge AI, TinyML, sensors, and IoT without any wiring or extra modules.
Key Built-In Hardware
- ESP32-S3 microcontroller (TinyML-ready)
- 2.8" Color Display (240×320)
- 2MP Camera (perfect for image classification & object detection)
- Dual MEMS Microphones + Speaker (sound classification, anomaly detection, voice features)
Sensors:
- Temperature & Humidity
- Ambient Light
- 6-Axis IMU (motion & gesture detection)
- RGB LED, Buttons, MicroSD Slot, GPIO Expansion
- Wi-Fi + Bluetooth connectivity
Everything you need for AI experiments is already on the board — no extra hardware required.
Built-In AI Features
The K10 ships with ready-to-use demos like:
- Face detection
- Object recognition
- QR code detection
- Motion detection
- Offline voice commands
These give you an instant taste of what the hardware can do.
Why It’s Perfect for Edge Impulse
- Onboard sensors make data collection effortless
- Runs lightweight ML models directly on the device
In this tutorial, we’ll use all these onboard features to build real AI projects step-by-step.
Step 2: Introduction to Edge Impulse

Edge Impulse is a platform that makes it easy to build and deploy machine learning models on edge devices like the UNIHIKER K10, without needing deep AI expertise. It's designed for beginners, makers, and engineers who want to turn sensor data → trained ML model → real-world application in just a few steps.
Why Edge Impulse?
- No AI background needed: Build models using a visual workflow.
- Works with any sensor: IMU, microphone, camera, environment sensors and more...
- Collect data directly from the device: Record motion, sound, images, or environmental readings in real time.
- Automated ML pipeline: Feature extraction, model training, tuning—handled for you.
- Easy deployment: Export models directly as firmware, libraries, or code you can run on the K10.
Step 3: Core Concepts of Edge Impulse
To build any ML project in Edge Impulse, you’ll work through a simple workflow: Acquire Data → Build an Impulse → Train → Deploy
Here’s what each part means and how you’ll use it in this tutorial.
3.1. Data Acquisition (Collecting Data)
Machine learning always starts with good, clean data. Edge Impulse supports multiple ways to bring data into your project. In this tutorial, we’ll cover these three:
- Data Forwarder: Streams live sensor values (IMU, temperature, humidity, light, etc.) from the K10 to Edge Impulse. Great for collecting gesture, environmental, or motion data.
- CSV Wizard: Upload CSV files and automatically map columns to sensors. Useful for any data you already logged or generated separately.
- Data Upload: Drag-and-drop images, audio files, or sensor logs directly. Perfect for camera datasets or pre-recorded sound samples.
These three methods give you flexibility no matter what sensor or project you’re working on.
3.2. Creating an Impulse
An Impulse is your complete ML pipeline — the recipe that explains how data flows from input → preprocessing → model → output. An Impulse contains:
- Input block (sensor data, audio, images, etc.)
- Processing block (feature extraction)
- Learning block (the model you train)
- Output block (classification, prediction, score, detection)
3.3. Understanding the Blocks
- Time-Series Blocks: For IMU, environmental sensors, or audio. Generates features like frequency, energy, motion patterns.
- Spectrogram / MFCC (for audio): Converts sound into frequency images that the model can learn from.
- Image Processing (for camera images): Resizes, normalizes, and prepares data for object detection or classification.
- Neural Network / Regression / Anomaly Blocks: Models that learn patterns and make predictions.
3.4. Types of Models in Edge Impulse
- Classification: Identifies which category something belongs to. (Gestures, Sound types, Simple image classes)
- Regression: Predicts a continuous value. (Temperature/light prediction, Sensor trend estimation)
- Anomaly Detection: Finds "something unusual" compared to normal behavior. (Identifying abnormal movements, Monitoring environmental changes)
- Object Detection: Locates and identifies objects in images (bounding boxes). (Counting objects, Detecting tools or items)
3.5. Training the Model
Once your impulse is set, you train your model with your collected data:
- Choose training/test split
- Tune hyperparameters (automatically or manually)
- Check accuracy and performance metrics
- Visualize loss curves and confusion matrix
3.6. Deploying the Model
After training, Edge Impulse lets you deploy in multiple ways (Firmware, Library, WebAssembly, etc.). For this tutorial, you’ll deploy models as an Arduino Library for the Unihiker K10.
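Whichever model you build, the exported Arduino library exposes the same small inference API. Below is a rough, minimal skeleton of how it is used; the header name `your_project_inferencing.h` is a placeholder for your own project's library, and the feature-filling step is left as a stub:

```cpp
// Minimal inference skeleton for an Edge Impulse Arduino library export.
// <your_project_inferencing.h> is a placeholder: use the header name of
// the library downloaded from your own project.
#include <your_project_inferencing.h>

// One window of input data; this size macro is defined by the exported library.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

void setup() {
  Serial.begin(115200);
}

void loop() {
  // TODO: fill `features` with sensor readings for one window.

  // Wrap the buffer in a signal_t and run the impulse.
  signal_t signal;
  numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // Print the score for every class in the project.
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
      Serial.printf("%s: %.3f\n", result.classification[ix].label,
                    result.classification[ix].value);
    }
  }
  delay(1000);
}
```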
Step 4: Motion Data Classification

From Step 4 to Step 10, we will train and deploy a Motion Classification model on the Unihiker K10 using Edge Impulse.
- To begin, go to edgeimpulse.com.
- If you are a new user, create an account and log in.
- Once logged in, you will see your Project Dashboard.
- Click Create new project, enter a project name, select Public or Private, and click Create.
Step 5: Prepare for Data Acquisition

In this step, we will collect 6-axis IMU data from the Unihiker K10 into the Edge Impulse project using the Data Forwarder method.
5.1 Flash the Data-Streaming Arduino Code
- Download the GitHub repo for this project: Learn-Edge-AI-on-Unihiker-K10-Edge-Impulse-Beginner-Tutorial
- Open the Motion_Data_Collect sketch in the Arduino IDE. Before uploading the code, ensure that the Unihiker K10 board package is installed (see 5.2).

5.2 Install the Unihiker K10 Board in Arduino IDE
- Open File → Preferences.
- In Additional Board Manager URLs, paste: https://downloadcd.dfrobot.com.cn/UNIHIKER/package_unihiker_index.json
- Click OK.
- Go to Tools → Board → Board Manager.
- Search for Unihiker and install it.

5.3 Connect and Upload the Code
- Connect the Unihiker K10 to your PC via USB.
- Go to Tools:
- Select Board: Unihiker K10
- Select the correct COM port
- Enable USB CDC on Boot
- Upload the sketch. Once the upload is successful, the K10 is ready to stream IMU data over USB.
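Under the hood, the data-forwarder protocol is very simple: the sketch prints one line of comma- or tab-separated sensor values per sample at a steady rate over serial, and the CLI infers the sampling frequency from the line rate. A minimal sketch of that shape is shown below; the `readAccelX/Y/Z()` stubs are hypothetical stand-ins for the actual K10 IMU calls used in Motion_Data_Collect:

```cpp
// Minimal data-forwarder-style streamer (illustrative, not the repo code).
// Replace the stubbed readAccel*() functions with the K10 IMU API that
// Motion_Data_Collect actually uses.
const uint32_t SAMPLE_INTERVAL_MS = 10;  // 100 Hz; the CLI infers this rate

// Hypothetical stubs so the sketch compiles; swap in real IMU reads.
float readAccelX() { return 0.0f; }
float readAccelY() { return 0.0f; }
float readAccelZ() { return 0.0f; }

void setup() {
  Serial.begin(115200);  // the data forwarder listens on this serial port
}

void loop() {
  static uint32_t last = 0;
  if (millis() - last >= SAMPLE_INTERVAL_MS) {
    last += SAMPLE_INTERVAL_MS;
    // One sample per line: these become the CSV columns you name in the
    // CLI (x, y, z).
    Serial.print(readAccelX()); Serial.print(',');
    Serial.print(readAccelY()); Serial.print(',');
    Serial.println(readAccelZ());
  }
}
```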

5.4 Install Edge Impulse CLI
To send data from the device to Edge Impulse, install the CLI on your PC: Edge Impulse CLI Installation Guide
After installation, open a new Command Prompt window and run:
edge-impulse-data-forwarder
The CLI will ask you to log in using your Edge Impulse email and password.

5.5 Select Your Device and Project
- Select the COM port used by the Unihiker.
- Select the project you created for this tutorial.

5.6 Naming the Sensor Parameters
The CLI will ask for CSV column names. For IMU data, name them: x, y, z
Next, enter a device name.

5.7 Start Collecting Data
Do not close the Command Prompt window. Now return to your browser.
Step 6: Data Collection in Edge Impulse

- Open your Edge Impulse project and go to Data Acquisition.
- You should see your device listed under “Connected Devices.”
- Set the Label (e.g., idle, shake, left_right, up_down) and Sample Length.
- Click Start Sampling.
Gestures to Record:
- Idle (keep the device still)
- Shake
- Left-Right (LR) motion
- Up-Down (UD) motion

- Record 3 samples × 10 seconds each → 30 seconds of data per gesture.
- After collecting, click the three dots next to each 10-second sample and choose Split sample into 1-second windows.
- Move a few samples from Training to Testing (80:20 split).


Step 7: Create Impulse

- Go to Create Impulse.
- Add blocks:
- Processing Block: Spectral Analysis (captures motion frequency patterns).
- Learning Block: Classification.
- Click Save Impulse.

Step 8: Generate Features

- Go to Spectral Features.
- Keep default parameters and click Save parameters.
- Switch to Generate Features and click Generate features.
- You’ll see a Feature Explorer plot showing clusters for each gesture.

Step 9: Train the Model

- Go to the Classifier panel.
- Keep default settings.
- Click Save & Train.
- Review Accuracy, Loss, Confusion Matrix, and Performance Metrics.

Step 10: Deploying the Model on Unihiker K10

- Export the Model as Arduino Library:
- Go to Deployment.
- Choose Arduino Library.
- Click Build. Download the .zip file.

- Install the Arduino Library:
- In the Arduino IDE, go to Sketch → Include Library → Add .ZIP Library…

- Import the Example Code:
- Open Motion_Data_Classification.ino from the GitHub repo.

- Fix the Library Name:
- Replace #include <your_ei_library_name_here.h> with the actual header name from the installed library (check the library's bundled examples for the exact name).

- Upload to Unihiker K10:
- Select Board: Unihiker K10, correct COM port.
- Click Upload.

The code maps gestures to RGB Colors and Emojis (a simplified sketch of this logic follows the list):
- Idle → Green + Normal face
- UD → Pink + Happy emoji
- Shake → Red + Spiral eyes
- Left/Right → Blue + Sleepy emoji
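Inside the sketch, this mapping boils down to finding the class with the highest score and switching on its label. A trimmed sketch of that logic is below; it assumes the Edge Impulse inferencing header is already included, and `setRgb()` is a hypothetical stand-in for whatever drives the K10's RGB LED in the repo code:

```cpp
// Pick the top-scoring class from an inference result and react.
// setRgb() is a placeholder for the board-specific LED driver call.
void setRgb(uint8_t r, uint8_t g, uint8_t b) { /* board-specific */ }

void handleResult(const ei_impulse_result_t &result) {
  // Find the index of the highest-confidence class.
  size_t best = 0;
  for (size_t ix = 1; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (result.classification[ix].value > result.classification[best].value) {
      best = ix;
    }
  }
  // Labels must match the ones you used during data collection.
  const char *label = result.classification[best].label;
  if      (strcmp(label, "idle") == 0)       setRgb(0, 255, 0);     // green
  else if (strcmp(label, "up_down") == 0)    setRgb(255, 105, 180); // pink
  else if (strcmp(label, "shake") == 0)      setRgb(255, 0, 0);     // red
  else if (strcmp(label, "left_right") == 0) setRgb(0, 0, 255);     // blue
}
```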

Step 11: Environmental Sensor Regression
Steps 11-14 cover training and deploying an Environmental Sensor Regression model to predict a "Comfort Score".
11.1. Create a New Edge Impulse Project
Start a new project for regression.

11.2. Generating a Dataset
Use the dataset generator web app: Dataset Generator
- Configure Min/Max Temp & Humidity.
- Generate and download CSV.

11.3. Import Data Using CSV Wizard
- In Edge Impulse, go to Data Acquisition → CSV Wizard.
- specific "No time-series data", Label: "Comfort score", Features: "temperature", "humidity".

11.4. Upload the Data
Upload the CSV data.
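With the CSV Wizard settings above, the uploaded file should look roughly like this (a header row matching the feature and label columns; the values here are purely illustrative, and your generator's exact column names may differ):

```
temperature,humidity,comfort score
24.5,55.2,8.7
31.0,80.4,3.2
18.2,35.7,6.1
```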

Step 12: Create Impulse & Generate Features
12.1. Create the Impulse
- Processing Block: Raw Data.
- Learning Block: Regression.
- Click Save impulse.

12.2. Generate Features
- Go to Raw data → Save parameters → Generate features.

Step 13: Train the Regression Model & Understand the Results
Now that all features are generated, the next step is to train the regression model that predicts the Comfort Score based on Temperature and Humidity.
1. Train the Model

- Go to the Regression panel in Edge Impulse.
- Leave all settings at their default values.
- Click Save & Train.

Edge Impulse will now train a lightweight neural network regression model. Once training is complete, you will see the model performance results.
2. Understanding the Training Results

After training, Edge Impulse provides two important visual reports:
Model Metrics (Validation Set)
These metrics indicate how accurate the model is:
- Loss (MSE): 0.42 (low mean squared error → very good accuracy)
- Mean Absolute Error (MAE): 0.51 (predictions are off by only ±0.5 on average)
- Explained Variance Score: 0.93 (the model explains 93% of the variation → excellent)
Interpretation
- A 0.42 MSE and 0.93 variance score indicate that the model fits the data very well.
- The small MAE means comfort scores are predicted very close to the real values.
- Overall, this is a stable and highly accurate regression model ready to be deployed.
Data Explorer (Prediction Visualization)
The scatter plot shows how well the model predicts values across the entire dataset.
- Green dots – Predictions that are correct (within the acceptable error threshold).
- Red dots – Predictions that are incorrect (outside the error threshold).
From the visualization, we can observe:
- Most data points are green, meaning the model learned the relationship between Temperature, Humidity, and Comfort Score very well.
- Only a few points are red, usually in extreme environmental ranges.
- This indicates a high-quality regression model suitable for real-time applications.
Edge Impulse also lists the on-device performance:
- Inferencing time: ~2 ms
- Peak RAM usage: ~1.2 KB
- Flash usage: ~10.2 KB
This confirms the model is extremely lightweight and ideal for deployment on Unihiker K10.
3. Export the Model
Now go to the Deployment section:
- Select Arduino Library
- Click Build to generate the library
- Edge Impulse will automatically download the .zip file
This library will be used in the next step to run comfort score predictions directly on the Unihiker K10.
Step 14: Upload & Run the Regression Model on Unihiker K10
Now let’s deploy the Comfort Score Regression model on the Unihiker K10.
14.1. Open the Arduino Example Code
- From the downloaded GitHub repository, open the Environmental_Sensor_Regression sketch in the Arduino IDE.

14.2. Install Required Libraries
a. Install the Edge Impulse Regression Model Library
- Go to Sketch → Include Library → Add .ZIP Library
- Select the Arduino library you downloaded from Edge Impulse.
- Once installed, the library will appear under File → Examples.
Important: Update the Edge Impulse library name in the code to match the exact name of your downloaded library folder.
b. Install the AHT20 Sensor Library
- Open Library Manager (Tools → Manage Libraries)
- Search for AHT20
- Install the official AHT20 temperature & humidity sensor library.
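Before uploading, it helps to see the overall shape of the sketch. Here is a minimal sketch of the flow, under a few assumptions: the inferencing header name is a placeholder for your own export, and the AHT20 calls follow the common begin/getTemperature/getHumidity style (adjust the include and method names to the library you installed):

```cpp
// Illustrative regression flow: read AHT20, run the Edge Impulse
// regression model, print the predicted comfort score.
#include <Wire.h>
#include <AHT20.h>                               // adjust to your AHT20 library
#include <your_regression_project_inferencing.h> // placeholder header name

AHT20 aht20;

void setup() {
  Serial.begin(115200);
  Wire.begin();
  aht20.begin();
}

void loop() {
  // The impulse was built on two raw features: temperature, humidity,
  // so the input frame for this model is exactly two values.
  float features[] = { aht20.getTemperature(), aht20.getHumidity() };

  signal_t signal;
  numpy::signal_from_buffer(features, 2, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
    // For regression models, the prediction is the first (only) output.
    Serial.printf("Comfort score: %.2f\n", result.classification[0].value);
  }
  delay(2000);
}
```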

14.3. Upload the Code to Unihiker K10
- Select the correct Board (Unihiker K10)
- Select the correct COM port
- Click Upload

The first compilation may take some time—be patient.
14.4. View the Results on the Screen
Once uploaded successfully, the K10 display will show:
- Live Temperature readings
- Live Humidity readings
- Predicted Comfort Score (from the regression model)
The values update in real time, demonstrating on-device machine learning running directly on the Unihiker K10.
Step 15: Audio Classification – Data Collection
In Step 15 to Step 17, we will collect audio data using the Unihiker K10, train an audio classification model, and deploy it on the device.
1. Create a New Edge Impulse Project
- Go to edgeimpulse.com
- Create a new project for audio classification.

2. Upload Audio Data Collection Firmware
- Open the Audio_Data_Collect Arduino sketch from the GitHub repository.
- In the code:
- Set the number of labels
- Define the label names (classes)

```cpp
// ---------------- Settings -----------------
String labels[] = {"Noise", "Gun", "Chainsaw"};
int labelCount = 3;
int currentLabelIndex = 0;
```

- Upload the code to the Unihiker K10.
3. Audio Recording Interface on K10
Once uploaded, the K10 screen will display:
- Current label name
- Sampling time (in seconds)
- Name of the last recorded audio file
Each audio file is saved as: LabelName_RandomNumber.wav
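For context, a .wav file is just a small header followed by raw samples. Below is a sketch of the standard 44-byte header such firmware presumably writes before streaming 16-bit mono PCM data (the 16 kHz sample rate is an assumption, not confirmed from the repo code):

```cpp
// Canonical 44-byte PCM WAV header for 16-bit mono audio. A recorder
// writes this once, streams raw samples after it, then patches the two
// size fields when recording stops.
#include <stdint.h>

struct __attribute__((packed)) WavHeader {
  char     riff[4]       = {'R','I','F','F'};
  uint32_t fileSize      = 0;         // 36 + dataSize, patched at the end
  char     wave[4]       = {'W','A','V','E'};
  char     fmt_[4]       = {'f','m','t',' '};
  uint32_t fmtSize       = 16;        // PCM format chunk size
  uint16_t audioFormat   = 1;         // 1 = PCM
  uint16_t numChannels   = 1;         // mono
  uint32_t sampleRate    = 16000;     // assumed; common for audio ML
  uint32_t byteRate      = 16000 * 2; // sampleRate * channels * bytesPerSample
  uint16_t blockAlign    = 2;         // channels * bytesPerSample
  uint16_t bitsPerSample = 16;
  char     data[4]       = {'d','a','t','a'};
  uint32_t dataSize      = 0;         // patched at the end
};
```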
4. Button Controls

The onboard buttons control recording and navigation:
Button A
- Click → Start audio recording
- Long press → Go to previous label
Button B
- Click → Increase recording time by 1 second
- Double click → Reset recording time to default
- Long press → Go to next label
5. Collect Audio Samples
For this demo, record the following classes:
- Noise (background sound)
- Gunshot
- Chainsaw
Data Collection Plan
- Each recording: 10 seconds
- Total data per class: ~1 minute
This provides enough variation for training a reliable audio classification model.
Step 16: Audio Data Processing

Once the audio samples are recorded and saved on the SD card, the next step is to import and prepare the data in Edge Impulse.
1. Upload Audio Data

- Open your Edge Impulse project
- Go to Data Acquisition → Upload data
- Select all audio files belonging to one label
- Set the Label name correctly
- Upload the files
Repeat this for each audio class.
2. Split Audio Samples

- After uploading, each audio file will be 10 seconds long
- For better training, split each sample into 1-second segments (in both the training and test sets)
- This increases the number of training samples and improves accuracy
3. Crop Important Audio Features

- While splitting, make sure to crop around meaningful audio spikes
- Remove silent or irrelevant portions of the audio
- Focus on sections where the sound pattern is clearly visible

Step 17: Training the Audio Classification Model
Once your audio dataset is clean and properly split, it’s time to train and deploy the model.
1. Create the Impulse

- Go to Create Impulse
- Set:
- Processing block
- MFE → for non-voice sounds (noise, machines, events)
- MFCC → for voice or speech-based data
- Learning block → Classification
- Leave all other settings as default
- Click Save Impulse
2. Generate Features

- Open the MFE / MFCC panel
- Click Save parameters
- Click Generate features (use default settings)
This step extracts meaningful frequency features from the audio samples.
3. Train the Model

- Go to the Classification panel
- Keep all parameters at their default values
- Click Save & Train
4. Review Model Results

Once training is complete, Edge Impulse will display:
- Overall model accuracy
- Confusion matrix showing how well each class is recognized
- Per-class performance metrics
If the accuracy is low, consider collecting more data or improving audio cropping.
Model Results Explanation
From the results shown above, we can see that the audio classification model is performing extremely well:
Accuracy: 100%
- The model correctly classified all validation samples.
Confusion Matrix
- Chainsaw, Gun, and Noise are all classified with 100% confidence
- No overlap or misclassification between classes
- This means each sound has a very distinct audio pattern.
Evaluation Metrics
- Precision, Recall, and F1 Score are all 1.0
- Area under ROC Curve is 1.0, indicating perfect separability
Data Explorer
- Each class forms a clearly separated cluster
- This shows the MFE/MFCC features successfully captured unique sound characteristics
On-device Performance
- Inference Time: 7 ms
- RAM Usage: 14.7 KB
- Flash Usage: 41 KB
This confirms the model is accurate, lightweight, and perfectly suited for real-time edge deployment on the Unihiker K10.
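Once deployed (as an Arduino library, same as in Step 10), on-device audio inference usually follows Edge Impulse's continuous-classification pattern. Here is a minimal sketch of that pattern; the header name is a placeholder, and `captureAudioSlice()` is a hypothetical stub for the K10's I2S microphone capture:

```cpp
// Sketch of continuous audio inference (illustrative, not the repo code).
#include <your_audio_project_inferencing.h>  // placeholder header name

static int16_t sampleBuffer[EI_CLASSIFIER_SLICE_SIZE];

// Placeholder: replace with real I2S microphone reads on the K10. Should
// fill `buf` with `len` fresh 16-bit samples.
void captureAudioSlice(int16_t *buf, size_t len) { memset(buf, 0, len * 2); }

// Signal callback: convert raw int16 samples to float on demand.
static int getData(size_t offset, size_t length, float *out) {
  numpy::int16_to_float(&sampleBuffer[offset], out, length);
  return 0;
}

void setup() {
  Serial.begin(115200);
  run_classifier_init();  // required before run_classifier_continuous()
}

void loop() {
  captureAudioSlice(sampleBuffer, EI_CLASSIFIER_SLICE_SIZE);

  signal_t signal;
  signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
  signal.get_data = &getData;

  ei_impulse_result_t result;
  if (run_classifier_continuous(&signal, &result, false) == EI_IMPULSE_OK) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
      Serial.printf("%s: %.2f  ", result.classification[ix].label,
                    result.classification[ix].value);
    }
    Serial.println();
  }
}
```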
Step 18: Image Object Detection – Data Collection
In Step 18 to Step 20, we will collect image data, train an image object detection model, and deploy it on the Unihiker K10.
1. Create a New Project

Go to Edge Impulse and create a new project for image object detection.
2. Upload Image Data Collection Firmware
- Open the Image_Data_Collect Arduino sketch from the GitHub repository
- In the code:
- Set the number of labels
- Define the label names
```cpp
/* =========================================================
   Labels Configuration
   ========================================================= */
String labels[] = {"BACKGROUND", "IN", "CN", "USA", "HK"};
int labelCount = 5;
int currentLabelIndex = 0;
```

- Upload the code to the Unihiker K10.
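For orientation, the capture-and-save step in firmware like this typically goes through the ESP32 camera driver. A minimal sketch of that operation follows; it assumes the camera and SD card were already initialized elsewhere (the repo sketch handles the K10-specific pin configuration) and that the sensor is set to JPEG output:

```cpp
// Grab one JPEG frame and write it to the SD card (illustrative sketch).
#include "esp_camera.h"
#include "SD.h"

bool saveJpeg(const String &label) {
  camera_fb_t *fb = esp_camera_fb_get();  // grab one frame from the driver
  if (!fb) return false;

  // Filename follows the <LabelName>_<RandomNumber>.jpeg convention.
  String path = "/" + label + "_" + String(random(100000)) + ".jpeg";
  File f = SD.open(path, FILE_WRITE);
  if (f) {
    f.write(fb->buf, fb->len);            // raw JPEG bytes from the sensor
    f.close();
  }
  esp_camera_fb_return(fb);               // hand the frame buffer back
  return true;
}
```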
3. Image Capture Controls
Once the code is uploaded, use the buttons to capture images:
- Button A → Capture image
- Button B → Next label
- Long press Button A → Previous label

Each captured image is saved as: <LabelName>_<RandomNumber>.jpeg
4. Collect Image Dataset
For this demo, images were collected for 4 different coins:
- Indian Coin
- Chinese Coin
- Hong Kong Coin
- USA Coin
5. Background Images
Background-only images (without objects) help the model learn what not to detect. These improve detection accuracy and reduce false positives.
Data Collection Tips
- Capture 15–16 images per class
- Use different orientations, angles, and lighting
- Ensure the object is clearly visible in each frame
6. Upload Images
- Copy images from the SD card
- Upload all images to Data Acquisition in Edge Impulse
- Upload without assigning labels (labels will be added during bounding box annotation)
Step 19: Box Annotation

After uploading all images, the next step is to label the objects using bounding box annotation.
1. Annotate Images

- Open Data Acquisition
- Click on each uploaded image
- Draw a bounding box around the object (coin)
- Assign the correct label name
Make sure the box tightly covers only the object.
2. Annotate All Samples
- Repeat this process for all images
- Annotate both training and test datasets
Once all images are annotated, your dataset is ready for impulse creation and model training in the next step.
Step 20: Training the Image Object Detection Model
Now that all images are annotated, we can create and train the object detection model.
1. Create the Impulse

- Go to Create Impulse
- Set:
- Processing block → Image
- Learning block → Object Detection
- Leave all other settings as default
- Click Save Impulse
2. Configure Image Processing

- Open the Image block
- Select Grayscale (reduces memory usage and improves edge performance)
- Click Save parameters
- Click Generate features
3. Train the Object Detection Model

Go to the Object Detection panel and change the following parameters:
- Training cycles: 60
- Learning rate: 0.01
Leave other settings unchanged and click Save & Train.
4. Model Results Explanation

From the results shown above:
F1 Score: 96%
- Indicates a strong balance between precision and recall for object detection.
Confusion Matrix
- Coins from India, China, Hong Kong, and the USA are detected correctly
- Background samples help reduce false detections
Metrics
- Precision (non-background): 1.00 → very few false positives
- Recall (non-background): 0.92 → most objects are detected successfully
- F1 Score: 0.96 → excellent real-world performance
On-device Performance
- Inference Time: ~1.1 seconds
- RAM Usage: 119 KB
- Flash Usage: 81 KB
This confirms the model is accurate and optimized for edge object detection on the Unihiker K10.
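On the device, an object detection export reports detections as bounding boxes rather than a single class. Below is a minimal sketch of reading them after run_classifier(); it assumes the project's inferencing header is included, as in Step 10:

```cpp
// Read object-detection results from an Edge Impulse export (sketch).
// Detections live in result.bounding_boxes, with a count alongside.
void printDetections(const ei_impulse_result_t &result) {
  for (size_t ix = 0; ix < result.bounding_boxes_count; ix++) {
    const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[ix];
    if (bb.value == 0) continue;  // skip empty slots
    Serial.printf("%s (%.2f) at x=%u y=%u w=%u h=%u\n",
                  bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
  }
}
```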
Step 21: UNIHIKER K10 – Detailed Pin Mapping, I²C Addresses & Control Lines

This section describes how each major peripheral on the UNIHIKER K10 is electrically connected to the ESP32-S3, including exact GPIO pins, I²C addresses, and signal roles, as observed from the schematic and validated by practical code mapping.
1. Core MCU – ESP32-S3-WROOM-1
The ESP32-S3 acts as the central controller and directly interfaces with camera, audio (I2S), SPI peripherals, USB, and I²C devices.
Step 22: Camera Interface (GC2145 – Parallel DVP)

Camera Sensor: GC2145
Interface Type: 8-bit parallel DVP + I²C control
The camera data & sync pins and camera control lines are detailed in the schematic's pin-mapping tables.
Step 23: Display Interface (ILI9341 – SPI)

Display Controller: ILI9341
Interface: SPI (no MISO)
Note: SPI alone is insufficient; display initialization requires XL9535 configuration first.
Step 24: SD Card (SPI Mode)

Interface: SPI (shared bus)
Step 25: Audio Subsystem (I2S)

I2S Signal Mapping
Microphone ADC – ES7243E
Function: Digital microphone ADC
Interface: I2S + I²C
Speaker Amplifier – NS4168
Step 26: I²C Bus Devices (Shared Bus: GPIO47 / GPIO48)
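A quick way to verify the devices sharing this bus is an I²C scanner. A minimal sketch is below; note that which GPIO is SDA versus SCL should be confirmed against the schematic (GPIO48/GPIO47 is assumed here):

```cpp
// Scan the shared I2C bus and print every responding address.
#include <Wire.h>

void setup() {
  Serial.begin(115200);
  Wire.begin(48, 47);  // SDA, SCL (assumed order; verify on the schematic)
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {
      Serial.printf("Found device at 0x%02X\n", addr);
    }
  }
}

void loop() {}
```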

Step 27: GPIO Expander – XL9535 (Critical Component)

IC: XL9535QF24
Address: 0x20
Functions Controlled via XL9535
Step 28: Buttons – Button A & Button B

Buttons are NOT connected directly to ESP32 GPIOs.
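Instead, they are read through the GPIO expander, so polling a button means reading the XL9535's input register over I²C. The XL9535 is PCA9535-register-compatible (0x00/0x01 input, 0x02/0x03 output, 0x06/0x07 direction). Here is a hedged sketch of that access; which expander pin each button sits on must be confirmed from the schematic, so the bit mask below is a placeholder:

```cpp
// Minimal XL9535 access over I2C (PCA9535-compatible register map).
#include <Wire.h>

const uint8_t XL9535_ADDR = 0x20;  // address from the schematic notes
const uint8_t BTN_A_MASK  = 0x01;  // placeholder bit for Button A

void xl9535Write(uint8_t reg, uint8_t val) {
  Wire.beginTransmission(XL9535_ADDR);
  Wire.write(reg);
  Wire.write(val);
  Wire.endTransmission();
}

uint8_t xl9535Read(uint8_t reg) {
  Wire.beginTransmission(XL9535_ADDR);
  Wire.write(reg);
  Wire.endTransmission(false);                 // repeated start
  Wire.requestFrom(XL9535_ADDR, (uint8_t)1);
  return Wire.read();
}

void setup() {
  Serial.begin(115200);
  Wire.begin(48, 47);       // SDA, SCL (verify order on the schematic)
  xl9535Write(0x06, 0xFF);  // port 0 direction: all pins as inputs
}

void loop() {
  uint8_t port0 = xl9535Read(0x00);        // read input port 0
  bool aPressed = !(port0 & BTN_A_MASK);   // buttons are usually active-low
  Serial.println(aPressed ? "Button A pressed" : "released");
  delay(200);
}
```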
Conclusion
In this tutorial, you learned how to build end-to-end Edge AI applications on the Unihiker K10 using Edge Impulse.
We covered:
- IMU-based gesture classification - Motion_Data_Classification
- Environmental sensor regression for comfort prediction
- Audio classification using the onboard microphone - Audio_Data_Classification
- Image object detection using the camera - Image_Data_Object_Detection
You explored the complete workflow — from data collection and preprocessing to model training, optimization, and deployment as an Arduino library running directly on the device with real-time inference and on-screen visualization.
This demonstrates how powerful and accessible Edge AI has become — enabling anyone to turn sensor data into intelligent, responsive applications without cloud dependency.
What’s next?
- Add more sensors or classes
- Improve models with more data
- Combine multiple models into a single smart system
- Build real-world applications like smart assistants, safety systems, or interactive devices
Thanks for following along, and happy building with Edge Impulse on Unihiker K10!