Embedded Android SBC Blog
Embedded systems, Android SBCs, displays & daily engineering notes

Thursday, 27 November 2025

How LCD Screens Work: A Clear Guide to Modern Display Technology


From phones and laptops to appliances and automotive dashboards, LCD screens are everywhere. Although we interact with them every day,
few people stop to consider how these displays produce sharp images, accurate colors, and bright visuals.
This article offers a clear explanation of how LCD screens work, covering their core components, the role of liquid crystals,
the importance of backlighting, how pixels form, and the display types used in different applications.



Understanding LCD

1. Core Components of an LCD Screen




An LCD screen is built from several carefully engineered layers that work together to control how light travels through the display.
The three essential parts are:




  • Backlight: The light source behind the display. For designers comparing different brightness levels for indoor applications, a reference list of normal-brightness LCD display options can be helpful.

  • Liquid Crystal Layer: A thin layer containing light-modulating liquid crystals.

  • Color Filters: Red, green, and blue filters that give each pixel its color.



Backlight: LCDs do not emit light on their own. The backlight, usually made of LEDs in modern displays,
provides the illumination needed for the screen to be visible.



Liquid Crystal Layer: This layer is made of tiny cells filled with liquid crystals arranged between two
polarizers. These crystals twist or untwist when voltage is applied, regulating how much light passes through.



Color Filters: Every pixel contains three sub-pixels: one red, one green, and one blue. By adjusting how
much light passes through each sub-pixel, the display creates full-color images.






2. Why Liquid Crystals Are Essential




Liquid crystals are unusual materials that behave partly like liquids and partly like solid crystals. Their orientation changes when an electrical signal is applied, allowing them to control the direction and intensity of light.




When voltage changes the alignment of the liquid crystals, the amount of light that reaches each sub-pixel also changes.
This is how an LCD creates different brightness levels, shades, and colors.



Tip: Avoid exposing LCD screens to extreme heat or freezing temperatures, as these can affect the structure and performance of liquid crystals.






3. How Backlighting Works in an LCD Screen




Backlighting is one of the defining characteristics of LCD technology. Without it, the display would appear completely dark.
The backlight system typically includes:




  • An LED or fluorescent light source

  • A diffuser panel that spreads light evenly

  • Polarizers and optical films that direct and shape the light




Light from the backlight passes through the diffuser to create a uniform sheet of illumination. This light then travels through
polarizers, the liquid crystal layer, and the color filters before forming the final image.
If any step in this chain is disrupted, the display can appear dim, uneven, or discolored.
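
As a rough illustration of this chain, the short Python sketch below multiplies assumed transmission factors for each optical layer to estimate how much of the backlight actually reaches the viewer. The numbers are illustrative guesses, not values from any panel datasheet.

    # Illustrative estimate of how much backlight survives the optical stack.
    # The transmission factors are rough assumptions, not datasheet values.
    backlight_nits = 1000.0              # raw LED backlight luminance (assumed)

    layer_transmission = {
        "rear polarizer":  0.45,         # absorbs roughly half of the unpolarized light
        "liquid crystal":  0.90,         # pixel driven fully "open" (assumed)
        "color filter":    0.30,         # each sub-pixel passes only its own band
        "front polarizer": 0.90,
    }

    luminance = backlight_nits
    for layer, t in layer_transmission.items():
        luminance *= t

    print(f"Estimated white-state luminance: {luminance:.0f} nits")
    # Only about a tenth of the backlight reaches the viewer with these factors,
    # which is why backlights are specified much brighter than the panel rating.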






4. How Pixels Are Formed in an LCD Display




Each pixel in an LCD contains three sub-pixels: red, green, and blue. Thin-film transistors (TFTs) act as switches that
control the voltage applied to each sub-pixel. This voltage determines the orientation of the liquid crystals and how much
light reaches each color filter.




By combining different intensities of red, green, and blue light, the display can produce millions of colors. This fine control
is what allows LCD screens to show detailed text, smooth gradients, and sharp images.
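
As a small worked example, the Python sketch below assumes a standard 24-bit RGB888 pixel format, splits a color value into its three sub-pixel drive levels, and shows why 8 bits per sub-pixel gives roughly 16.7 million possible colors.

    # Split a 24-bit RGB888 color into its three sub-pixel drive levels.
    def to_subpixels(color):
        red   = (color >> 16) & 0xFF     # 0-255 drive level for the red sub-pixel
        green = (color >> 8) & 0xFF      # 0-255 drive level for the green sub-pixel
        blue  = color & 0xFF             # 0-255 drive level for the blue sub-pixel
        return red, green, blue

    print(to_subpixels(0xFFD700))        # a yellow: red and green high, blue off
    print(256 ** 3)                      # 16,777,216 representable colors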






5. The Role of Color Filters




Color filters are essential in converting white backlight into full-color images. The process works as follows:




  1. The backlight produces white light.

  2. A polarizer aligns the light waves in one direction.

  3. Light travels through the red, green, and blue sub-pixels.

  4. Liquid crystals adjust the amount of light passing through each sub-pixel.

  5. The combination creates the final visible color.




Every color on the screen, from bright yellow to deep blue, is created by mixing different levels of these three primary colors.






6. Types of LCD Technologies and Their Uses




Although all LCDs use liquid crystals and backlighting, they differ in how the liquid crystals are arranged and controlled.
The three most common LCD types are:



Twisted Nematic (TN)



  • Fast response time

  • More affordable

  • Limited color accuracy and viewing angles




TN panels are common in gaming monitors and budget displays where speed is more important than viewing angles.



In-Plane Switching (IPS)



  • Excellent color reproduction

  • Wide viewing angles

  • Slightly slower response time




IPS displays are used in smartphones, tablets, professional monitors, and applications that require accurate colors.



Vertical Alignment (VA)



  • High contrast levels

  • Better viewing angles than TN

  • Slower response than TN and IPS




VA panels strike a balance between color performance and contrast, making them popular for televisions and general-purpose monitors.






Conclusion




LCD screens remain one of the most widely used display technologies thanks to their reliability, cost-effectiveness,
and strong visual performance. By understanding how backlighting, liquid crystals, and color filters work together,
we gain a clearer picture of the engineering behind the screens we use every day.




Whether used in industrial equipment, consumer electronics, or home appliances, LCD technology continues to evolve,
offering improved brightness, wider viewing angles, and more energy-efficient designs.







Sunday, 23 November 2025

Gemini 3 Officially Launches to a Warm Market Welcome


After months of anticipation and quiet testing with selected partners, Gemini 3 has officially been released.
The new generation brings a noticeable step forward in performance, stability, and ease of integration, and early feedback from
customers and industry partners has been clearly positive.


Gemini3

What Is Gemini 3?




Gemini 3 is the latest iteration in the Gemini product line, designed for teams that need reliable, responsive, and
scalable digital intelligence in their everyday tools and workflows. While previous versions focused mainly on core
capabilities, Gemini 3 shifts the emphasis toward real-world usability: faster responses, smoother integration with
existing systems, and more predictable behavior under heavy load.




Rather than being a single feature or app, Gemini 3 is a complete platform update. It includes improvements in the
underlying engine, new developer tools, and a more refined experience for end users who interact with Gemini-powered
interfaces.



Key Improvements in Gemini 3




With each major release, the Gemini team has aimed to fix long-standing pain points while introducing practical new
capabilities. Gemini 3 continues this trend with a focus on three main areas:



1. Performance and Responsiveness



One of the most noticeable changes in Gemini 3 is how quickly it responds under real-world workloads. Internal testing
and early adopters report shorter response times, smoother handling of concurrent requests, and better consistency
when traffic spikes unexpectedly.



2. Reliability in Production



Gemini 3 has been built with production environments firmly in mind. The update introduces more robust monitoring
hooks, clearer error reporting, and better failover behavior. For teams deploying Gemini into customer-facing products,
these changes reduce operational headaches and make it easier to trust the system at scale.



3. Easier Integration and Tooling



A new generation of SDKs, clearer documentation, and more thoughtful defaults make Gemini 3 easier to integrate into
existing stacks. Developers can move from prototype to production with fewer custom workarounds, and teams can roll out
updates more confidently.



How Gemini 3 Fits into Real-World Workflows




Gemini 3 is already being tested across a wide range of scenarios. Some companies are using it to support internal
knowledge tools, while others are building customer-facing assistants, smarter dashboards, and context-aware interfaces.




  • Customer Support: Helping support teams answer questions faster while keeping humans in control of the final response.

  • Operations and Monitoring: Surfacing relevant information from complex logs and alerts without overwhelming the operator.

  • Content and Documentation: Assisting teams as they draft, review, and refine technical documents or training materials.

  • Internal Tools: Powering chat-style interfaces on top of company knowledge bases, so staff can find information with a few lines of text.




In all of these cases, Gemini 3 is positioned as an assistant rather than a replacement. The goal is to
reduce mechanical work, not to remove people from the loop.



Market Reaction So Far




The first wave of feedback from existing Gemini users has been encouraging. Teams that upgraded from earlier versions
describe Gemini 3 as more predictable and less fragile under everyday use. Many mention that the improvements feel
incremental rather than flashy, but that this is exactly what they wanted: fewer surprises, better stability, and a
smoother experience for both developers and end users.




New customers, especially those in software, manufacturing, and professional services, have shown strong interest in
piloting Gemini 3 as part of their digital transformation plans. For many of them, the appeal lies not in a single
headline feature, but in the combination of:




  • More consistent performance

  • Cleaner integration paths

  • Support for practical, day-to-day use cases




Several partners have already started sharing early case studies, highlighting reduced response times in their customer
workflows and smoother collaboration between human staff and Gemini-backed tools.



Looking Ahead




With the launch of Gemini 3, the roadmap starts to shift from "can it be done?" to "how far can we take it in real
products?". The foundation laid by this release gives teams room to explore new interfaces, connect more systems, and
bring intelligent behavior into places where it previously felt too fragile or experimental.




For now, the focus is on careful rollouts, listening closely to customer feedback, and steadily refining the areas that
matter most in day-to-day work: speed, reliability, and ease of use. If the early market response is any indication,
Gemini 3 is arriving at the right time, with the right balance between ambition and practicality.




As more companies adopt the new platform over the coming months, Gemini 3 is likely to become the default baseline for
projects where intelligent, responsive behavior is no longer a bonus feature, but simply an expected part of the product.





Friday, 21 November 2025

Rockchip RK3576 NPU: AI Acceleration for Embedded Systems

1. Introduction



The Rockchip RK3576 is a next-generation ARM-based SoC designed for powerful yet power-efficient embedded devices.
One of its most important components is the integrated NPU (Neural Processing Unit), which provides dedicated hardware
acceleration for artificial intelligence workloads such as image recognition, object detection, voice processing, and
other machine learning tasks at the edge.



In many modern embedded systems, the CPU and GPU are no longer enough to handle deep learning models efficiently.
Instead, the NPU takes over the heavy tensor computations, allowing real-time AI inference with lower latency and lower power consumption.
This makes the RK3576 particularly attractive for smart panels, industrial HMIs, home automation gateways, retail terminals,
and AI-enabled IoT devices.



RK3576

2. Overview of the RK3576 SoC



The RK3576 is built around a multi-core ARM Cortex-A application cluster (for example, big.LITTLE combinations) with an integrated GPU
and a dedicated NPU. While the exact configuration may vary depending on Rockchip's final product documentation and board design,
the typical feature set includes:



  • Multi-core ARM Cortex-A CPU for application and OS tasks (Android or Linux).

  • Integrated GPU for 2D/3D graphics and UI acceleration.

  • Dedicated NPU for deep learning inference acceleration.

  • Support for high-resolution displays (MIPI, LVDS, eDP, HDMI, or RGB, depending on the board).

  • Multiple camera interfaces for vision-based applications.

  • Comprehensive I/O: USB, Ethernet, UART, SPI, I2C, GPIO, PCIe, and others depending on the hardware platform.



In this architecture, the CPU focuses on general-purpose logic and system control, the GPU handles graphics and rendering,
and the NPU is responsible for neural network operations. This separation of tasks is key to achieving good real-time performance
without overloading any single processing element.



3. What the RK3576 NPU Does



The NPU in the RK3576 is designed specifically for accelerating deep learning inference, not training. Typical workloads include:



  • Image classification (for example, recognizing product types or detecting fault states).

  • Object detection and tracking (for cameras, safety zones, people counting, etc.).

  • Face detection and basic face recognition in smart terminals.

  • Gesture recognition or pose estimation in user interaction scenarios.

  • Voice wake-up or keyword spotting when combined with audio input.



By moving these operations from the CPU to the NPU, the system can:



  • Run more complex models in real time.

  • Reduce overall CPU load and keep UI and system tasks responsive.

  • Lower power consumption, which is critical for fanless or compact devices.



4. Supported AI Frameworks and Model Flow



Rockchip typically provides a toolchain and SDK for deploying neural network models onto the NPU.
Although the exact tool versions and framework support depend on the official Rockchip release, the general flow is similar:



  1. Develop and train your model in a mainstream framework such as TensorFlow, PyTorch, or ONNX-based workflows.

  2. Export the trained model to a supported interchange format (for example, ONNX or TensorFlow Lite).

  3. Use Rockchip's conversion tools to compile and quantize the model into an NPU-friendly format.

  4. Integrate the compiled model into your application using the Rockchip NPU SDK and runtime libraries.

  5. Deploy and test on the RK3576-based hardware platform, profiling performance and adjusting input resolutions or model complexity as needed.
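
As an illustration of that flow, the Python sketch below follows the style of Rockchip's RKNN-Toolkit2 API. The method names, parameters, and the "rk3576" target string are assumptions based on publicly available toolkit releases; the exact API depends on the SDK version shipped with your BSP.

    # Hedged sketch of the ONNX -> RKNN conversion flow (API details may differ
    # between RKNN-Toolkit2 releases; treat names and arguments as assumptions).
    from rknn.api import RKNN

    rknn = RKNN()

    # Preprocessing and the target platform are configured before conversion.
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
                target_platform="rk3576")

    rknn.load_onnx(model="model.onnx")            # step 2: import the trained model
    rknn.build(do_quantization=True,              # step 3: compile and INT8-quantize
               dataset="quant_dataset.txt")       # list of calibration images
    rknn.export_rknn("model.rknn")                # NPU-ready artifact used in steps 4-5
    rknn.release()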



A typical application stack on RK3576 might look like this:



  • Operating system: Android or embedded Linux (Buildroot / Yocto based BSP).

  • Application framework: C/C++, Java/Kotlin (Android), or Python/C bindings depending on the use case.

  • AI runtime: Rockchip NPU runtime API, often wrapped in a higher-level inference engine.

  • Hardware: RK3576 SBC or custom mainboard with appropriate peripherals (camera, display, sensors).



5. Performance Factors and Design Considerations



The raw TOPS (tera-operations per second) number of the NPU is only part of the story.
Real-world performance depends on multiple factors:



  • Model architecture: Lightweight models like MobileNet, EfficientNet-Lite, and YOLO-tiny variants often perform better on embedded NPUs.

  • Input resolution: Reducing input image size (for example, from 1080p to 720p or 640×480) can significantly increase inference speed.

  • Quantization: INT8 or low-precision quantization is usually required for maximum NPU throughput.

  • Memory bandwidth: Efficient use of DDR and on-chip buffers avoids bottlenecks.

  • Pipeline design: Overlapping image capture, preprocessing, NPU inference, and post-processing can reduce end-to-end latency.
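
To illustrate the pipeline point, the sketch below overlaps capture, preprocessing, and inference with plain Python threads and queues. The capture_frame, preprocess, and npu_infer helpers are hypothetical stand-ins for whatever camera and NPU runtime APIs your platform provides.

    # Minimal producer/consumer pipeline so that capture, preprocessing, and NPU
    # inference overlap instead of running back-to-back. The three helpers below
    # are hypothetical placeholders, not a real camera or NPU API.
    import queue
    import threading

    def capture_frame():   return b"raw frame"     # hypothetical camera call
    def preprocess(frame): return frame            # resize / normalize placeholder
    def npu_infer(tensor): return {"boxes": []}    # hypothetical NPU runtime call

    raw_q  = queue.Queue(maxsize=4)   # frames waiting for CPU preprocessing
    prep_q = queue.Queue(maxsize=4)   # tensors waiting for the NPU

    def capture_stage():
        while True:
            raw_q.put(capture_frame())

    def preprocess_stage():
        while True:
            prep_q.put(preprocess(raw_q.get()))

    def inference_stage():
        while True:
            result = npu_infer(prep_q.get())
            print("detections:", result)           # post-processing / UI update goes here

    for stage in (capture_stage, preprocess_stage, inference_stage):
        threading.Thread(target=stage, daemon=True).start()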



For system designers, it is important to profile the entire pipeline instead of only looking at NPU benchmark numbers.
A well-balanced design ensures:



  • The CPU is not blocked by preprocessing and communication overhead.

  • The GPU can still handle UI tasks smoothly while the NPU is loaded.

  • The thermal design can sustain continuous NPU load in real-world ambient temperatures.



6. Typical Use Cases of RK3576 NPU in Embedded Products



The RK3576 NPU is aimed at products that need on-device intelligence without relying on cloud servers.
Some representative scenarios include:



6.1 Smart Control Panels and HMI Devices



In smart home or building automation panels, the NPU can be used for:



  • Face recognition or presence detection for personalized UI and access control.

  • Gesture detection for touchless control in kitchens, bathrooms, or medical environments.

  • Local voice keyword detection to wake up the system without constant cloud connectivity.



6.2 Industrial Vision and Quality Inspection



In industrial settings, the RK3576 can be paired with one or more cameras to perform:



  • Defect detection on production lines.

  • Reading barcodes or QR codes under challenging lighting conditions.

  • Monitoring safety zones to detect human presence near dangerous machines.



6.3 Retail, Kiosks, and Vending Machines



Retail terminals and kiosks benefit from local AI in several ways:



  • Customer behavior analysis (people counting, dwell time estimation).

  • Product recognition for self-checkout or smart vending machines.

  • Anonymous demographics estimation to analyze store traffic patterns.



6.4 Edge Gateways and Smart Cameras



For edge gateways and smart cameras, the RK3576 NPU allows:



  • Running detection models locally and only sending metadata to the cloud.

  • Reducing bandwidth usage and improving privacy.

  • Maintaining system functionality even with unreliable network connections.



7. Software Integration: Linux and Android



The RK3576 is typically supported by both Android and Linux BSPs.
From a software engineer's perspective, the NPU integration looks slightly different on each OS:



  • On Android: AI workloads may be integrated through native code (JNI), Rockchip's AI SDK, or higher-level frameworks depending on the BSP.
    The application can combine NPU inference with GPU-accelerated UI and multimedia features.

  • On Linux: Developers usually work with C/C++ libraries and command-line tools to deploy and test models.
    This is common for headless devices or industrial HMI systems built with Qt, GTK, or web-based frontends.
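
Where the BSP provides Python bindings, a minimal on-device inference sketch might look like the following. It assumes the RKNN Lite runtime that Rockchip ships for its NPUs; the module and method names can differ between toolkit-lite releases, so treat them as assumptions rather than a reference.

    # Hedged on-device inference sketch (assumes Rockchip's rknn-toolkit-lite
    # Python bindings; names may differ between releases).
    import numpy as np
    from rknnlite.api import RKNNLite   # assumed module name; check your BSP's toolkit-lite

    rknn = RKNNLite()
    rknn.load_rknn("model.rknn")        # model produced by the conversion flow in section 4
    rknn.init_runtime()                 # bind to the NPU on the running board

    frame = np.zeros((1, 224, 224, 3), dtype=np.uint8)   # stand-in for a real camera frame
    outputs = rknn.inference(inputs=[frame])
    print([o.shape for o in outputs])

    rknn.release()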



In both environments, careful packaging of models, runtime libraries, and firmware is required to ensure reliable updates across product lifecycles.



8. Design Tips for Using RK3576 NPU in Products



When you plan a new product based on RK3576, it is helpful to consider the following early in the design phase:



  • Define clear AI use cases: Start with a small number of focused AI features rather than trying to use the NPU for everything.

  • Choose hardware-friendly models: Use models known to run efficiently on embedded NPUs, and avoid extremely heavy architectures.

  • Plan for updates: Make sure your software and storage layout support updating models and NPU runtimes in the field.

  • Test under thermal stress: Verify NPU performance at maximum ambient temperature and under continuous load.

  • Integrate with the UI: For HMI devices, align NPU-based features with UI/UX design so that AI functions feel natural to end users.



9. Conclusion



The Rockchip RK3576 NPU brings dedicated AI acceleration to embedded and edge devices, enabling real-time inference that would be difficult or inefficient on CPU and GPU alone.
By combining a multi-core ARM processor, GPU, and NPU on a single SoC, RK3576 offers a strong platform for smart displays, industrial HMIs, retail terminals, and intelligent gateways.



For product teams, the key to unlocking the value of the RK3576 NPU is not just its theoretical performance, but how well the entire system is architected:
model choice, pipeline design, thermal management, and long-term software maintenance all matter.
When these elements are planned together, the RK3576 NPU can significantly shorten response times, reduce cloud dependency, and deliver a smoother, more intelligent experience in modern embedded systems.






Wednesday, 19 November 2025

Understanding FD-SOI Technology: A Modern Approach to Low-Power and High-Efficiency Semiconductor Design


Fully Depleted Silicon-On-Insulator (FD-SOI) technology has emerged as one of the most efficient semiconductor process platforms for applications that require ultra-low power consumption, high energy efficiency, and strong performance in harsh environments. Compared with traditional bulk CMOS and FinFET technologies, FD-SOI provides a unique balance of cost, power, and analog/mixed-signal performance, making it particularly attractive for IoT devices, edge AI processors, automotive electronics, and aerospace applications.



FD-SOI Technology

1. What Is FD-SOI?



FD-SOI stands for Fully Depleted Silicon-On-Insulator, a semiconductor fabrication technique that places a thin silicon layer on top of a buried oxide (BOX) layer. Because the silicon layer is extremely thin, the transistor channel becomes fully depleted, meaning that no residual charges remain inside the channel area. This allows the transistor to operate with far less leakage and better control.




A simplified FD-SOI stack includes:




  • Ultra-thin top silicon layer

  • Buried oxide (BOX) insulation layer

  • Silicon substrate




This structure improves electrostatic control while keeping the process highly planar and compatible with existing manufacturing equipment.



2. Why FD-SOI Matters: Key Advantages



2.1 Ultra-Low Leakage Power



Because the transistor channel is fully depleted, leakage current drops significantly, often by an order of magnitude compared with bulk CMOS. This is critical for battery-powered devices, wearables, and long-running IoT sensors where energy consumption must be minimized.



2.2 Body Biasing for Performance Tuning



One of the signature features of FD-SOI is its ability to use Dynamic Body Biasing (DBB). Engineers can apply forward bias to boost performance or reverse bias to dramatically reduce leakage. This tuning capability enables:




  • Adaptive performance based on workload

  • Near-zero standby power

  • Greater flexibility in power-sensitive designs




In contrast, FinFET processes either lack body biasing or support it with very limited effectiveness.
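
To make the body-bias effect concrete without referencing any specific process, here is a toy first-order model in Python: body bias shifts the threshold voltage, and subthreshold leakage depends exponentially on that threshold. All coefficients are invented for illustration and are not taken from any foundry's FD-SOI design kit.

    # Toy model: body bias shifts Vth, and subthreshold leakage scales roughly
    # as exp(-Vth / (n * kT/q)). All coefficients are illustrative assumptions.
    import math

    def relative_leakage(v_body,
                         k_body=0.085,   # assumed Vth shift per volt of body bias
                         vth0=0.40,      # assumed nominal threshold voltage (V)
                         n=1.3,          # assumed subthreshold slope factor
                         vt=0.026):      # thermal voltage kT/q at room temperature (V)
        """Leakage relative to zero body bias (positive v_body = forward bias)."""
        vth = vth0 - k_body * v_body
        return math.exp(-vth / (n * vt)) / math.exp(-vth0 / (n * vt))

    for bias in (-2.0, 0.0, +1.0):       # reverse bias, no bias, forward bias
        print(f"body bias {bias:+.1f} V -> relative leakage {relative_leakage(bias):.3g}")

With these assumed numbers, a couple of volts of reverse bias cuts leakage by roughly two orders of magnitude, while forward bias trades extra leakage for higher speed, which is exactly the tuning knob described above.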



2.3 Better Analog, RF, and Mixed-Signal Performance



FD-SOI exhibits excellent linearity, low noise, and predictable behavior, making it ideal for:




  • RF transceivers

  • 5G/6G modems

  • Automotive radar front ends

  • Mixed-signal sensor interfaces




The insulating buried oxide layer reduces parasitic capacitance and minimizes substrate coupling, ideal for sensitive analog and high-frequency designs.



2.4 Radiation Tolerance and Reliability Advantages



The insulating BOX layer provides strong resistance to single-event upsets and latch-up effects. This makes FD-SOI attractive for aerospace, medical, defense, and automotive safety systems.



2.5 Lower Cost Than FinFET



FD-SOI avoids the 3D manufacturing complexity of FinFETs. Its planar process flow:




  • Reduces mask count

  • Lowers manufacturing cost

  • Improves yield

  • Works with more mature tools and fabs




This makes FD-SOI a strong candidate for mid-performance chips that do not require the extreme density of FinFETs.



3. FD-SOI Compared with Bulk CMOS and FinFET








































                          Bulk CMOS       FD-SOI                        FinFET
Power Consumption         High leakage    Very low leakage, tunable     Low leakage but higher dynamic power
Performance               Moderate        High with body bias           Very high
Manufacturing Cost        Low             Lower than FinFET             High
Analog/RF Performance     Moderate        Excellent                     Poorer due to 3D structure
Radiation/Noise Immunity  Low             High                          Moderate


4. Typical Applications for FD-SOI



FD-SOI is not designed to replace FinFET in high-end CPUs or AI accelerators. Instead, it dominates markets where power efficiency, analog integration, and environmental resilience matter.




  • IoT and edge devices: smart sensors, wearables, home automation

  • Automotive electronics: ADAS, radar, infotainment ECUs

  • RF and communication chips: 5G modems, GNSS, Wi-Fi

  • Industrial and medical devices: long-life embedded systems

  • Aerospace and defense: radiation-hard electronics



5. Why FD-SOI Is Growing Again



The global shift toward battery-powered and ultra-efficient devices has renewed interest in FD-SOI. Companies such as STMicroelectronics, GlobalFoundries, and Samsung have expanded their FD-SOI manufacturing lines, offering nodes like 28nm, 22nm, and 18nm.




Key market drivers include:




  • Edge AI (requires efficient on-device processing)

  • 5G/6G radios (demand excellent RF behavior)

  • Automotive functional safety

  • Low-power industrial sensors




FD-SOI's balance of power, cost, and analog performance positions it uniquely between mature CMOS processes and cutting-edge FinFET nodes.



6. Conclusion



FD-SOI technology offers a compelling set of advantages for modern semiconductor design. Its ultra-low leakage, body-bias tuning, strong analog/RF characteristics, and superior radiation tolerance make it ideal for IoT, automotive, industrial, and aerospace devices. While FinFET remains the choice for high-performance logic, FD-SOI has secured its place in applications requiring efficiency, reliability, and mixed-signal integration. As demand for low-power intelligent devices grows, FD-SOI is expected to play an increasingly important role in the semiconductor ecosystem.






STM32V8: A New Era of High-Performance Microcontrollers with 18nm Technology and Arm Cortex-M85


STMicroelectronics has introduced a major leap in microcontroller technology with the launch
of the STM32V8 series, a next-generation family of MCUs built using an advanced
18nm process and powered by the Arm Cortex-M85 core.
This new architecture significantly elevates the performance profile of the STM32 ecosystem,
bridging the gap between traditional microcontrollers and entry-level application processors.


STM32V8 MCU


For developers working in industrial automation, edge AI, robotics, automotive electronics,
and high-performance embedded systems, the STM32V8 is positioned as one of the most impactful
MCU releases in recent years.



Why the STM32V8 Platform Represents a Breakthrough




Microcontrollers traditionally rely on semiconductor nodes ranging from 40nm to 90nm,
a limitation that directly affects energy efficiency, computational throughput, and
peripheral integration. By transitioning to 18nm FD-SOI,
STMicroelectronics has set a new benchmark for the MCU market.
This aggressive shrink in process geometry opens the door to higher transistor density,
lower leakage current, and improved overall performance.




Below are the core technological advancements that define the STM32V8 generation:




  • 18nm FD-SOI fabrication delivering unmatched power-performance efficiency

  • Arm Cortex-M85 core offering a significant step up from M7-based systems

  • Arm Helium vector extensions enhancing DSP and AI acceleration

  • Advanced TrustZone-M architecture for enhanced IoT and industrial security

  • High-bandwidth memory subsystem with fast cache and SRAM

  • Modernized connectivity, including Ethernet TSN, USB HS, and CAN FD




These improvements enable the STM32V8 to operate at performance levels once reserved
for low-end application processors, while retaining the predictable real-time behavior
and low energy consumption characteristic of microcontroller-based systems.



The Significance of the 18nm Process Node




One of the most transformative features of the STM32V8 series is its shift to an
18nm manufacturing node. Compared with earlier STM32 products manufactured on 40nm or larger
nodes, the benefits are substantial and immediately visible in real-world applications.




  • Higher transistor density enabling richer peripheral sets and more internal memory

  • Lower leakage and dynamic power improving efficiency in battery-powered devices

  • Higher achievable clock frequencies opening the door to near-GHz MCU performance

  • Cleaner analog signal behavior resulting in lower noise for precision systems




For comparison, the popular STM32H7, based on a 40nm node, already set a high standard for
performance in an MCU. STM32V8 now extends that boundary by delivering greater processing power,
better thermal behavior, and significantly improved energy metrics.



Arm Cortex-M85: The Most Powerful M-Class Core to Date




At the heart of the STM32V8 platform is the Arm Cortex-M85, currently the most capable core
in Arm's M-series lineup. The architecture integrates the Helium vector extension,
a technology previously introduced in the Cortex-M55 to dramatically boost DSP and ML workloads.




Key improvements over Cortex-M7 include:




  • Up to 6× improvement in DSP operations

  • Up to 3× improvement in machine learning inference performance

  • Enhanced floating-point unit with more efficient pipeline operations

  • Better real-time determinism for industrial control and robotics

  • Advanced TrustZone-M for secure partitioning of critical workloads




With these enhancements, STM32V8 MCUs can handle complex workloads such as motion control loops,
multi-sensor fusion, anomaly detection, and advanced filtering, domains that previously required
dedicated DSPs or specialized co-processors.



Next-Generation Memory Architecture




The STM32V8 series introduces a refined memory subsystem designed to minimize latency and maximize
instruction throughput. Built around the advantages of 18nm technology, this architecture helps
sustain high clock speeds without encountering bottlenecks typical of older MCU designs.




  • High-speed instruction cache enabling faster execution of complex code

  • Tightly-coupled memory for real-time and safety-critical routines

  • Larger on-chip SRAM tailored for AI and DSP applications

  • Fast NVM providing rapid boot sequences and secure updates



Industrial-Grade Security and Trust




As cyber-security becomes a foundational requirement in industrial and IoT systems,
the STM32V8 incorporates a comprehensive set of hardware-level protections:




  • Arm TrustZone-M enabling the separation of secure and non-secure domains

  • Secure boot and secure firmware update mechanisms

  • Cryptographic acceleration for AES, SHA, ECC, and other algorithms

  • Protected key storage and anti-tamper features

  • Robust memory protection units for high-reliability systems




These features make the STM32V8 a strong candidate for industries where reliability and security
are non-negotiable, such as healthcare, financial systems, industrial automation,
and mission-critical IoT deployments.



Modern Connectivity for Advanced Embedded Applications




Connectivity is a key strength of the STM32V8 family. Engineers building next-generation devices will find:




  • Ethernet with TSN for deterministic industrial networking

  • USB High-Speed for data-rich peripherals

  • CAN FD for automotive and robotic communication

  • Flexible SPI/QSPI/OSPI for external memory expansion

  • High-precision analog peripherals suited for control and measurement




The range of peripherals makes the STM32V8 suitable for distributed smart factories,
vehicle subsystems, high-speed instrumentation, and edge computing gateways.



How STM32V8 Compares with STM32H7







































Feature                STM32H7                          STM32V8
Process Node           40nm                             18nm
Core                   Cortex-M7                        Cortex-M85
DSP / AI Capability    Moderate                         High (Helium + enhanced FPU)
Security               Basic TrustZone                  Advanced secure architecture
Power Efficiency       Good                             Excellent due to 18nm
Target Application     General high-performance MCU     Industrial AI, robotics, advanced embedded


Applications Where STM32V8 Will Have the Biggest Impact



1. Industrial Automation



With its TSN-enabled Ethernet, improved timing accuracy, and strong processing performance,
the STM32V8 is well-suited for PLCs, servo drives, factory controllers, industrial sensors,
and real-time automation systems.



2. Edge AI and Machine Learning



The Cortex-M85 with Helium allows the MCU to run:




  • Neural network inference models

  • High-speed anomaly detection

  • Multi-sensor fusion

  • Predictive maintenance algorithms



3. Automotive Subsystems



Enhanced security, CAN FD, and deterministic compute performance make STM32V8 suitable for:




  • Vehicle body controllers

  • Sensor hubs

  • Real-time safety monitors

  • Gateway modules



4. Advanced Consumer Electronics



Smart appliances, interactive displays, and responsive user interfaces benefit from the MCU's
high compute capability and efficient power budget.



5. Medical Devices and Healthcare Electronics



With precision ADCs, stable timing, and secure data handling, STM32V8 is a strong fit for
biosignal monitoring, diagnostic instruments, and portable medical platforms.



Why STM32V8 Matters for the Future




The STM32V8 series is more than an incremental update; it represents a significant evolution
in how microcontroller-class devices can be used. By merging near-processor-class performance
with robust security, improved memory architecture, and ultra-efficient power consumption,
the STM32V8 opens the door to new categories of intelligent embedded systems.




For developers, the benefits are considerable:




  • Run more sophisticated algorithms directly on MCU hardware

  • Reduce dependence on external accelerators

  • Lower system BOM for high-performance applications

  • Maintain compatibility with the established STM32 ecosystem



Conclusion




With its innovative 18nm process, Cortex-M85 architecture, advanced security features,
and next-generation connectivity, the STM32V8 series sets a new benchmark in the MCU world.
It is designed to meet the rising demands of edge computing, industrial automation,
AI-enabled robotics, and advanced embedded electronics.




As developers begin exploring the STM32V8's capabilities, it is clear that this platform
will play a central role in powering the next generation of intelligent, energy-efficient,
and high-performance embedded systems.






Tuesday, 18 November 2025

Recent Movie Reflections: Small Thoughts After a Few Quiet Nights

movie


This month has been surprisingly calm. Work is still busy, the weather is getting colder, and the
streets feel a bit quieter than usual. Maybe because of that, I ended up spending a few evenings
watching movies: nothing planned, nothing thematic, just whatever felt right at the moment.
Sometimes a movie becomes a mirror for whatever mood we're in, and lately I've enjoyed that feeling
of quiet reflection.




I don't consider myself a serious film person, but I do like noticing small emotions, small details,
and small moments that linger long after the credits roll. These past few weeks, three movies stayed
with me for different reasons, and I wanted to write a little about them, not as reviews, but more
as personal notes.






1. A movie that reminded me of the rhythm of ordinary days




The first movie I watched was a slow-paced drama about a middle-aged woman rediscovering parts of
her life she once ignored. Nothing dramatic happens. No explosions, no twists, no fast edits.
Instead, the film focuses on routines: morning coffee, grocery shopping, casual conversations,
walking home in rain, small disappointments, and unexpected kindness from strangers.




What I loved most is how the film captured the beauty of unremarkable days. It made me think of
my own routines: the café I always visit, the small park I pass through, the familiar supermarket
aisle, the way sunlight hits the building near my apartment at around 4 PM. Sometimes we forget that
life is built from these small repeating pieces.




There was a scene where the main character quietly watches people inside a bus stop. For a moment,
nothing happens. But that stillness felt like a reminder: people around us all carry their private
stories, even when we don't notice.






2. A movie that made me think about connections and distance




The second film was completely different: a story about two friends who grew apart over ten years.
The plot jumps between past and present, showing how people change without realizing it. I found
myself thinking about my own friendships, especially the ones that slowly drifted away without
conflict, without drama, just life pulling us in different directions.




What struck me most is how the film treated silence. Not the dramatic kind, but the soft silence
between people who haven't talked in a long time. The silence that feels both comforting and
slightly sad. That feeling is hard to describe, but the movie captured it well: the kind of quiet
you only share with someone who used to matter.




One line stayed with me: "Some people aren't meant to stay forever, but they shape who we become."
It's simple, but it resonated deeply that night.






3. A visually stunning film that pulled me into another world




The last movie I watched recently was more artistic: full of vivid colors, unusual camera angles,
and dreamlike transitions. Every frame felt like a painting, and the sound design added a surreal
quality to the atmosphere. I didn't understand every symbolic detail, but maybe that's not the
point. Some films are meant to be felt more than analyzed.




There was a scene where the main character walks through a corridor covered in soft blue light,
with shadows slowly shifting behind him. It reminded me of how our minds often mix reality and
memory. Sometimes a place you once visited feels dreamlike when you think back to it, like a
fragment of a different world.




I realized that I enjoy these visually expressive films because they interrupt my usual thinking
patterns. For a moment, they make me feel present, almost like meditation. Not everything needs a
logical explanation; sometimes it's enough for something to simply feel beautiful.






What these movies left me thinking




It's interesting how movies can influence our mindset without us noticing. Over the past few weeks,
I found myself slowing down a bit. I walk more slowly, I look at the sky more often, I pay attention
to people's expressions, and I notice sounds from my neighborhood that I usually ignore.




Maybe it's because the movies I watched all deal with subtle emotions: routines, relationships,
distance, memory, and quiet inner changes. They made me think about how we move through life, how
we hold onto certain people, how we let go, and how we find meaning in small things.




Movies don't have to be extraordinary to leave an impact. Sometimes the ones that stay with us are
not the loud or dramatic ones, but the gentle ones that create a small shift inside us.






A few final thoughts




I'm planning to continue this small habit of watching movies on quiet evenings. Not to write reviews
or follow trends, but simply to enjoy the experience of seeing stories unfold. In a world where
everything feels fast and noisy, these slow moments feel precious.




Maybe next month I'll explore older films, or documentaries, or something completely random.
But for now, I'm grateful for these few quiet nights, these few stories, and the calm feeling they
left behind.




If you've watched something recently that made you pause or think, I'd love to hear about it.
Sometimes another person's recommendation leads to a movie we never expected to like, and that's
one of the best parts of exploring films.






Sunday, 16 November 2025

A Peaceful Walk Around Zagreb's Upper Town


During my short visit to Croatia, one of the places that left a strong impression on me was
Zagreb's Upper Town, known locally as Gornji Grad. It is one of the oldest parts of the city,
filled with narrow streets, colorful buildings, and a calm atmosphere that made me slow down
and enjoy the moment.



Zagreb's Upper Town

The Iconic St. Mark's Church




The first place I visited was St. Mark's Church, famous for its colorful tiled roof. I had
seen photos before, but standing in front of it felt completely different. The square was
quiet, and the sound of the wind passing through the streets made the whole area feel almost
timeless.



Exploring the Small Streets




As I walked around, I found that Upper Town was full of charming corners. The small cafés,
historic lamps, and old stone walls created a relaxing environment. Even though it's located
in the center of the capital, the area feels peaceful, almost like a small village.



The View from the Strossmayer Promenade




My favorite part of the walk was the Strossmayer Promenade. From there, I could see the lower
part of Zagreb, with its rooftops and church towers stretching into the distance. There were
artists selling paintings, musicians playing soft melodies, and a few couples enjoying the
view.



A Simple but Memorable Experience




Upper Town is not a place where you rush from one attraction to another. Instead, it invites
you to move slowly, observe quietly, and enjoy the surroundings. It was one of the calmest
afternoons I had during my stay in Croatia, and I would love to return again to explore more
hidden spots in the area.







Creative Commons License
This blog is licensed under a Creative Commons Attribution-ShareAlike license.