
Shanghai Neardi Technology Co., Ltd. sales@neardi.com 86-021-20952021

About Us
Why Choose Us
Neardi's technicians focus on designing, developing, and manufacturing trusted Systems on Modules, Single Board Computers, and Embedded Computers.

HIGH QUALITY

Trust Seal, Credit Check, RoHS and Supplier Capability Assessment. Our company has a strict quality control system and a professional test lab.

DEVELOPMENT

An in-house professional design team and an advanced machinery workshop. We can work with you to develop the products you need.

MANUFACTURING

Advanced automatic machines and a strict process control system. We can manufacture all the electrical terminals you require.

100% SERVICE

Bulk and customized small packaging, FOB, CIF, DDU and DDP. Let us help you find the best solution for all your concerns.

2014

Year Established

100+

Employees

10000+

Customers Served

1000000+

Annual Sales

Our Products

Featured Products

Contact Us
Contact at Any Time

Shanghai Neardi Technology Co., Ltd.

Address: Room 807, Building 1, Lane 1505, Lianhang Road, Minhang District, Shanghai
Phone: 86-021-20952021
Our Products
Top Products
Our Cases
Recent Industrial Projects
2024/12/13
Control board of the smart full-length mirror
Smart mirrors have become a new product category, such as smart mirrors for bathrooms, smart fitness mirrors in the fitness field, fitting mirrors for retail stores, and beauty mirrors. Many related startups are also favored by capital. Neardi can provide a full set of technical solutions such as smart full-body mirror control motherboards, and can customize products for customers on demand based on ARM platforms such as RK3568, RK3399, and RK3326, so as to facilitate and quickly implement related projects, allowing brands and operators to reduce R&D risks and focus on product operations.   The biggest difference between smart mirrors and ordinary mirrors is that they have built-in motherboards, cameras, sensors and other electronic accessories, and operating systems. They can develop related software according to scene requirements, and most smart mirrors have built-in touch screens, which greatly enrich the interactive scenes. There are three main application directions for the products currently available: smart home mirrors, which are generally placed in bathrooms or dressing tables. This type of product is mainly used for interaction and information broadcasting of smart home products, and is considered a network portal. Smart fitness mirrors for the fitness industry, which are mainly used for online fitness training that does not require large equipment such as yoga and dance, are mainly used to create a free online fitness platform to sell courses and related accessories and services. Fitting mirrors and makeup mirrors for the retail industry use virtual images to display the effect of makeup, reduce time costs, and promote transactions.   Among the categories of smart full-body mirrors, fitting mirrors and makeup mirrors are the earliest products to be launched. Fitting mirrors use mixed reality technology to capture the customer's body features through cameras, and output the customer's selected clothing combinations to the display in real time, so that customers can quickly see the effect of the selected clothing on the body, greatly shortening the fitting time, improving sales efficiency, and promoting transactions. The same is true for makeup mirrors, but compared with fitting mirrors, makeup mirrors collect more facial features for processing. Although fitting mirrors and makeup mirrors were launched early, the early processors had poor performance, data processing was not smooth, and realistic demonstration effects could not be presented, making it difficult to provide a good user experience. Therefore, these two products were more responsible for marketing functions in the early days, for customers to experience and attract customer traffic.   With the improvement of processor performance and the rapid popularization of 5G networks, today's processor performance is already very good. With the help of artificial intelligence technology and dedicated NPU processors, image processing can be completed quickly. With the help of 5G communication, complex calculations and rendering work can be handed over to cloud servers. The client displays the returned data in real time to provide an excellent experience. Therefore, most of the fitting mirrors and beauty mirrors on the market are undergoing iterative upgrades.   The development of the fitness industry reflects the economic level of a country to a certain extent. When the economy develops to a certain level, the people pay more attention to health and are more likely to accept professional fitness training. 
This is why many gyms are opened in economically developed areas or downtown areas, because their target customers are here. However, in order to attract customers, most gyms are equipped with a variety of large equipment, and the venue occupies a large area, which invisibly increases the operating costs. Although more and more people in China are beginning to pay attention to fitness, the popularity of fitness training is far less than that in Europe and the United States, and the operation of professional gyms is very difficult. If the venue is set in the suburbs, it is not convenient for customers to come to exercise. Therefore, many companies engaged in fitness training are exploring online training models, which can not only reduce operating costs, but also save the commuting costs of coaches and students, which is very convenient. The outbreak in 2019 also made people spend more time at home. Online video communication has become a way of communication accepted by the public, and home fitness has become a normal practice, which has played a certain role in promoting online fitness training.   From a technical perspective, the current technical solution can not only meet the real-time video communication between students and coaches, and provide one-to-one or one-to-many fitness guidance, but students can also learn by themselves by following the teaching video. The current new processors all have independent NPUs that are specifically designed to run AI models. With the help of artificial intelligence technology, human body movements can be accurately identified. Compared with standard movements, it is easy to judge the completion of fitness movements. At present, yoga, fat-burning gymnastics, dumbbells, etc. are relatively conventional training programs. These programs pursue the completion of movements and do not require external assistance. They are ideal AI training programs. But the ecology of fitness mirrors is far more than that. With the help of sensor technology, the physical condition of students can be accurately obtained, such as body fat, heartbeat, body temperature, body surface condition, facial expressions and other signs, which is convenient for formulating more scientific training plans and achieving good fitness results. In the era of big data, companies that can accurately obtain these user data can make great achievements in the field of big health.   The role of smart mirror in smart home is more like a smart screen. It relies on the necessity of the family - mirror. Although it is not used frequently, it is a necessity. Such products can complete the linkage well as a part of smart home. Many people call this kind of product a magic mirror. In fact, it is not because of how advanced the technology is, but more because of the huge contrast between smart mirror and traditional mirror, which gives people a magical feeling. The magic mirror is a mirror with a built-in operating system and touch screen, which is the same type of product as the smart screen. It can be connected to the Internet, run various software, and interact with other electronic devices. And as a necessity of life, the mirror can be used as a network entrance to discover the deep value.   Smart full-body mirror products seem to have many subcategories, but the technical platform behind them is similar, a touch screen, an operating system and peripheral sensors. 
Taking Rockchip's RK3568 as an example, it can support a variety of point screen interfaces, such as HDMI, eDP, LVDS, MIPI, RGB, V-by-one, etc., and supports multi-screen display and multi-screen touch to meet the display needs of smart mirrors. In terms of operating system, RK3568 not only supports Android 11, but also supports Linux systems such as Debian 10, Ubuntu Core, Yocto, etc., and there will be opportunities to support domestic UOS and Hongmeng OS in the future. It is convenient for developers to develop software and realize various functions. RK3568 also has a wealth of expansion interfaces to meet the complex needs of various cameras, array microphones, sensors, wireless communications, etc. ScenSmart can customize products for customers according to actual scenarios, which can meet the usage needs in different scenarios.   Neardi can provide customers with mature and stable technical solutions, reduce trial and error costs, shorten the R&D cycle, and help customers quickly implement smart full-body mirror projects. But this does not mean that smart mirrors are a low-threshold industry. Whether it is smart home, fitness mirror, or beauty mirror, it is an industry market and requires channel sales capabilities. At present, fitting mirrors and beauty mirrors are mainly for brand retailers, and the modeling of various clothes and samples takes a lot of time, and the initial investment is relatively large. The current prices of smart fitness mirrors are relatively high, and it is difficult to acquire customers. Fitness training is a niche market that requires professional coaches to participate in this ecosystem. The introduction and management of coaches are also a big challenge. Most smart home products belong to the soft furnishing market of home decoration. Although they can be shipped quickly in batches by connecting with developers or contractors, the financial pressure is often relatively large. Regardless of the product form, a large amount of user data will eventually be generated. The use of this data is a key issue that needs to be considered.   The smart mirror market is still a blue ocean and is in a period of rapid growth. There are no oligopolies in the industry. Due to the professional limitations of some industries, it is difficult for large Internet manufacturers to quickly enter. Although it is a niche market, it is a rigid demand market. It is suitable for start-ups, but more suitable for the upgrading and transformation of existing companies in the industry.
2024/12/13
High-performance industry application visual host LPA3399Pro, performance summary!
1. Product Description

The LPA3399Pro visual embedded computer is a portable computing host developed on the Rockchip RK3399Pro platform, targeting scenarios that require a large amount of visual computing. It has a built-in NPU computing unit with 3.0TOPS of computing power and supports multiple algorithm models. This product is a basic device for AI scenarios with rich hardware interfaces; users only need to port their algorithm to the platform to quickly implement a product. The LPA3399Pro supports 5-way AHD camera input and multiple depth camera inputs, making it suitable for machine vision and ADAS products. AHD cameras are widely used in the automotive field, using coaxial transmission with a reach of up to tens of meters and industry-standard aviation plug connectors, which are stable, reliable, and easy to install. The LPA3399Pro integrates 802.11a/b/g/n/ac dual-band WiFi, BT5.0 low-power Bluetooth, GPS+BD dual-mode navigation, seven-mode full-network 4G communication, and a 9-axis motion sensor; it supports multiple communication interfaces, including RS232, RS485, CAN, 1000M Ethernet, etc. These rich interfaces allow users to develop a wide range of products.

2. Functional Overview

Multi-channel camera access: a 5-channel high-definition AHD camera solution and a 4-channel USB camera solution provide an expansion basis for various application scenarios;
High-performance NPU AI platform: up to 3.0TOPS of computing power, multi-model compatibility, and multi-framework support provide a strong computing foundation for various AI applications;
Automotive-grade power protection front end: withstands a wide voltage input of -40V~60V with an operating voltage range of 9V~36V; overvoltage and undervoltage protection, overcurrent and overtemperature protection, ignition load-dump protection, etc., allow direct connection to various 12V or 24V battery power supply systems, providing a safe foundation for vehicle AI application scenarios;
Rich and diverse functional integration: 2G/3G/4G full-network data transmission, GPS/BD dual-mode positioning, 2.4G/5G dual-band WiFi, BT5.0 Bluetooth connection, and a 9-axis motion tracking sensor meet application development in many types of scenarios and provide the fastest prototyping basis for evaluating and demonstrating new products and applications;
Highly reliable peripheral interfaces: electrical isolation, electrostatic protection, electromagnetic shielding, anti-vibration, and anti-detachment provide a solid connection foundation for industrial control scenarios in harsh environments;
Efficient passive heat dissipation design: a large area of aluminum alloy heat dissipation fins guides the internal heat of the CPU directly to the external environment, providing a reliable thermal foundation for long-life, high-efficiency, continuous stable operation of the system.

3. Application Cases

Widely used in smart retail, AI smart robots, ADAS/DMS, smart security, edge computing terminals, machine vision, and other scenarios.
2024/12/13
LBA3588S: Innovative applications and multi-field solutions for intelligent computers
Neardi Technology's LBA3588 embedded computer, with its powerful NPU processing capabilities and rich interface support, provides innovative solutions for multiple industries. This article will focus on the application of LBA3588 in smart retail, multi-channel MIPI camera access, and multi-screen display, showing its important role in improving business intelligence and personalized services. With the continuous advancement of technology, smart computers are increasingly used in all walks of life. The LBA3588 embedded computer launched by Lindi Technology, with its advanced NPU technology and diversified interface support, provides strong technical support and personalized solutions for smart retail, monitoring, medical care, transportation, and other fields.   1. Innovative applications of smart retail The application of the LBA3588 embedded computer in the field of smart retail is mainly reflected in product identification, crowd counting, and intelligent recommendation. Through the powerful processing power of NPU, LBA3588 can quickly and accurately identify products, and at the same time conduct real-time statistics and behavior analysis of the flow of people in shopping malls or stores, providing merchants with decision support for optimizing product display and service processes.   2. Diversified applications of multi-channel MIPI camera access LBA3588 supports multi-channel MIPI camera access, which makes it possible to build multi-camera monitoring systems, stereoscopic vision systems, multi-view image processing, etc. Whether it is security monitoring, traffic flow monitoring, or medical imaging diagnosis, LBA3588 can provide clear and real-time image processing capabilities to meet the needs of different scenarios.   3. Flexible application of multi-screen heterogeneous display LBA3588 supports multiple interfaces such as HDMI, LVDS, EDP, USB, and DP, which can realize multi-screen heterogeneous displays, that is, display different content on multiple displays at the same time or expand the display space. Whether it is a digital billboard, monitoring center, conference room, or exhibition, LBA3588 can provide flexible display solutions to enhance the attractiveness and efficiency of information display. Powerful NPU processing capability: The NPU equipped with LBA3588 can efficiently process complex data and meet high-load computing needs. Rich interface support: The multi-interface design enables LBA3588 to flexibly adapt to various device connection requirements, including sensors, cameras, displays, etc. High system integration: The highly integrated design of LBA3588 reduces the dependence on external devices, simplifies system configuration, and improves system stability and reliability. Wide range of application scenarios: Whether it is industrial automation, IoT device connection, GPS positioning, or communication device connection, LBA3588 can provide customized solutions. Neardi Technology's LBA3588 embedded computer, with its excellent performance and wide range of application scenarios, has injected new vitality into the development of the intelligent era. With the continuous advancement of technology and the deepening of application, LBA3588 will show its unique value and potential in more fields.
2024/12/13
LPB3588 Embedded Computer - Industrial Control Solution!
1. Product Description LPB3588 Embedded Computer is a smart host carefully designed based on the Rockchip RK3588 chip platform; the body adopts an all-aluminum fanless design, and the innovative structural combination inside the body allows the key CPU and PMU and other major heat-generating components to directly conduct heat to the external aluminum shell so that the entire body shell acts as a heat dissipation material, which can withstand more stringent working environments and is widely used in a variety of industrial scenarios.   2. Interface Introduction LPB3588 Embedded Computer has 3 USB3.0 HOST onboard and one full-function type-C interface, which can be connected to multiple USB cameras; 2 mini-PCIe interfaces onboard, in addition to external 4G or 5G modules, can also be connected to our company's mini-PCIe interface NPU computing card developed based on RK1808, and combined with multiple cameras to form an artificial intelligence visual computing host that supports up to 12TOPS computing power. The LPB3588 Embedded Computer supports dual-band WIFI6, BT5.0, 2-way 1000M Ethernet, and supports the expansion of 4G or 5G modules; supports 2-way high-speed UART, 4-way RS232, 1-way RS485, 2-way CANBUS, and other common communication interfaces. The LPB3588 Embedded Computer supports 3-way HDMI output, 1-way DP output, 1-way dual-channel LVDS interface and backlight control, and touch screen interface, supports 1-way HDMI input, supports audio input and output, can be connected to an external 10W@8Ω stereo speaker, built-in M.2 nvme 2280 solid-state drive interface, can be connected to a variety of external displays and supports multi-screen display. The LPB3588 smart host supports 4-way relay control, including 4 groups of normally open, normally closed and COM ports; supports 4-way switch input, each with optocoupler isolation, supports active input (up to 36V) or passive input; supports 4-way analog input, supports 0~16V voltage detection or 4~20mA current detection, and can be connected to a variety of industrial transmitters. The LPB3588 Embedded Computer supports Android, buildroot, Debian, and Ubuntu systems, has the advantages of high performance, high reliability, and high scalability, and opens the system source code to users. Users can carry out secondary development and customization based on this product. Our company provides all-around technical support for developers and corporate users, enabling them to efficiently complete research and development work and greatly shorten the product research and development and mass production cycle.   3. 
Functional Overview

Power supply: DC 9-36V, supports overvoltage, overcurrent, surge, and reverse connection protection;
USB interface: 3 USB3.0 HOST, 1 full-function Type-C interface;
NPU expansion: can be used with our RK1808 AI computing card, up to 12TOPS computing power;
Multi-screen display: 3 HDMI2.0 outputs, 1 dual-channel LVDS display interface, 1 DP output interface, supports multi-screen display;
Video input: 1 HDMI input, up to 4K@30fps resolution;
Audio input and output: φ3.5 audio output and microphone input, can be connected to an external 10W@8Ω stereo speaker;
Network communication: 2 Gigabit Ethernet, BT5.0, dual-band WIFI6 supporting the 802.11 a/b/g/n/ac/ax protocols, optional 4G or 5G module;
Storage expansion: built-in M.2 M-KEY interface and SATA3.0 interface, supports expansion with SSDs and hard disks;
Data communication: 2 high-speed UART, 4 RS232, 1 RS485, 2 CANBUS interfaces;
Industrial control: 4 relay control, 4 switch input, 4 analog input;
System support: Android, buildroot, Debian, Ubuntu, and other OS options.
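The 4~20mA analog inputs described above typically connect to industrial transmitters that map a physical measurement range onto the current loop. As a minimal sketch (assuming a hypothetical 0~10 bar pressure transmitter; how the raw loop current is read from the board's driver is application-specific), the scaling from loop current to an engineering value looks like this:

```python
# Minimal sketch: scaling a 4-20 mA industrial transmitter reading (as read on
# one of the board's analog inputs) to an engineering value. The 0-10 bar
# pressure range is an assumed example, not a property of the LPB3588 itself.
def scale_4_20ma(current_ma: float, lo: float = 0.0, hi: float = 10.0) -> float:
    """Map 4 mA -> lo and 20 mA -> hi, clamping out-of-range readings."""
    current_ma = min(max(current_ma, 4.0), 20.0)
    return lo + (current_ma - 4.0) * (hi - lo) / 16.0

print(scale_4_20ma(12.0))  # mid-scale reading -> 5.0 bar
```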
Event
Our Latest News
From Algorithm Logic to Chip-side Deployment: The Evolution of YOLO Object Detection and Rockchip's Practice
Standing at a crossroads, you only need a fleeting glance for your brain to instantly label everything in your field of vision: that red bus is pulling into the station, the child on the sidewalk is running, and a food delivery scooter is speeding by on the side. This almost intuitive reaction was once extremely difficult for computers to learn. That was until YOLO came along. You Only Look Once—at the moment an image is captured, classification and localization are completed simultaneously. It allowed object detection to bid farewell to exhaustive searches and, just like human intuition, truly endowed machines with the essence of real-time thinking.

Visual "Intuition": The Regression Philosophy of YOLO

Before the birth of YOLO, the field of computer vision had long been dominated by the two-stage architecture. Back then, to detect an object, an algorithm first had to extract thousands of region proposals, and then classify them one by one. The genius of YOLO lies in that it completely overturned this cumbersome "proposal-then-verification" process and reconstructed object detection from a classification task into an end-to-end regression problem. When you input an image into the YOLO network, it cuts the Gordian knot by directly dividing the image into an S×S grid. Each grid is not only a slice of the image, but also a feature point in the network output tensor.

Integrated Tensor Prediction: Each grid directly predicts the coordinate information (x, y, w, h) of multiple bounding boxes, as well as a confidence score indicating whether an object is present here.

Parallel Classification and Localization: While predicting coordinates, each grid also calculates a set of class probabilities. This means that localization and classification are completed in a fully parallel manner within the output of the same layer of the neural network.

Global Feature Coupling: Thanks to the end-to-end design of the network, it has access to the global information of the entire image when making decisions.
Compared with traditional algorithms that only focus on local region proposals, YOLO’s such "big-picture view" enables it to identify background noise more accurately, making it less likely to misclassify irregularly shaped clouds as birds. YOLO in Industrial AI Vision Many people think AI is distant, but honestly, YOLO has long been "competing fiercely" in corners unseen by us. Smart Construction Sites: In tunnel construction sites filled with dust or with extremely poor lighting, YOLOv9 demonstrates extremely strong feature extraction capabilities. Behavior Compliance Detection: It can not only identify the presence or absence of safety helmets and reflective vests, but also determine whether they are worn properly (e.g., whether the helmet strap is fastened, or the zipper is fully zipped) through detailed features. High-concurrency Processing: It supports large-scale real-time detection of over 50 people per frame. Combined with infrared imaging technology, it realizes the leap from "manual monitoring" to "24/7 automatic early warning". Urban Governance: Urban management and comprehensive governance scenarios impose high requirements on the anti-interference capability of algorithms. Static Governance: By combining historical image comparison and semantic segmentation, the system can accurately identify newly-built illegal structures, garbage accumulation or road occupation for business, and even automatically quantify the area and volume of violations. Dynamic Security: Based on pose recognition (OpenPose/YOLO-Pose), the system can sensitively capture abnormal behaviors such as "person falling to the ground" and link with emergency medical systems. In dense crowds, it uses density clustering algorithm (DBSCAN) to monitor crowd density in real time and prevent stampede risks. Power Inspection: Multimodal Fusion in high-risk areas such as underground cable tunnels or high-voltage transmission towers: By fusing lidar point cloud and infrared thermal imaging, it can conduct non-contact detection of transformer abnormal heating, arrester leakage current or tower tilt (with an accuracy of 0.1°) from a distance of 30 meters. Automatic Defect Judgment: For minor hidden dangers such as cable damage and bracket corrosion, the recognition accuracy exceeds 92%, which greatly improves operation and maintenance efficiency and ensures personnel safety. Forest Fire Prevention: For large-area, irregularly-shaped smoke and fire detection, YOLO demonstrates ultra-fast response capability. Accurate Smoke and Fire Identification: Combining image features and thermal radiation data, it can distinguish wildfires, campfires or farmland burning within 2 seconds, with extremely strong anti-interference capability against clouds and vegetation shadows. Situation Awareness: Integrating GIS geographic information and random forest model, the system can not only detect fire, but also predict the spread trend based on wind speed and terrain, providing visual maps for on-site scheduling. Ultimate Computing Power Optimization for RK3588/RK3576 Honestly, benchmarking on a graphics card is just a warm-up. What truly enables YOLO to be deployed and implemented is porting it into chip-sized SoCs like Rockchip’s RK3588 or RK3576. This is not just a simple code migration, but an "extreme exploitation" of computing power, bandwidth, and memory. 
To achieve millisecond-level object detection on these SoC platforms, the following steps are typically required: "Translate" the Model: The chip’s NPU (Neural Processing Unit) has its own specifications and cannot interpret PyTorch’s native .pt training files. Using RKNN-Toolkit2, the model is converted to ONNX format, then disassembled and reconstructed into the .rknn format that the chip can understand—watching complex operators be rearranged into the computation paths favored by the NPU. "Slim Down" via Compression: Native FP32 (32-bit floating-point) models have an enormous number of parameters, imposing a heavy burden on the bandwidth and storage of embedded chips. Quantization algorithms compress weights and activations from 32-bit to 8-bit, reducing memory usage by a full 75%. This not only alleviates DDR bandwidth pressure but also effectively lowers computational power consumption. "Data Transfer" Optimization: Even if the model is fast enough, the NPU will still "sit idle" if the CPU is busy moving video streams in memory. To avoid wasting a single millisecond, DMA-BUF zero-copy technology is used to enable video stream data sharing in video memory among the ISP, GPU, and NPU, completely eliminating CPU copy overhead. Combined with parallel logic for asynchronous inference, the next frame is already queued for processing while the current frame is still undergoing convolution operations. This seamless coordination is what allows real-time video streams to run smoothly on the chip. Which YOLO Version Is Your "Go-to Choice"? When deploying on embedded devices, the choice of version is not simply about "chasing the latest"; instead, it requires balancing computing power overhead, operator compatibility, and the accuracy requirements of specific tasks. Engineering Benchmark: YOLOv5 As the version with the most mature ecosystem, YOLOv5 boasts extremely high stability and deployment coverage in the industrial sector. Technical Features: Adopts an Anchor-based mechanism with a flexible architecture (available in multiple scales from Nano to Huge). Deployment Advantages: Rockchip’s RKNN toolchain provides the most comprehensive support for it with excellent operator compatibility, making it the first choice for pursuing rapid project deployment and high stability. All-round Architecture: YOLOv8 YOLOv8 introduces an Anchor-free mechanism, achieving a unified architecture for detection, segmentation, and pose estimation (Pose). Technical Features: Utilizes the C2f module to enhance feature flow and improves regression accuracy through a Decoupled Head. Deployment Advantages: It strikes an excellent balance between accuracy and speed when handling multi-task parallelism (e.g., simultaneous object detection and human keypoint extraction), making it the mainstream solution on high-performance SoCs such as RK3588 at present. End-to-End Performance Leap: YOLOv10 YOLOv10 has made breakthrough progress in addressing the post-processing bottleneck in real-time detection. Technical Features: Introduces an NMS-free (Non-Maximum Suppression-free) strategy, eliminating non-determinism in inference latency through alignment design of one-to-many and one-to-one matching. Deployment Advantages: At the edge, NMS often accounts for a significant portion of CPU time consumption. YOLOv10 completely resolves this performance loss, enabling the inference process to exhibit better linear stability on SoC hardware. 
High-Precision Evolution: YOLOv11 and VajraV1 These represent the latest technological iterations for complex scenarios, focusing on capturing fine-grained features. Technical Features: YOLOv11 optimizes lightweight attention mechanisms (C3k2/C2PSA), while VajraV1 is deeply customized for edge devices on this basis. By widening core convolutions and adopting low-rank guided design, it significantly improves robustness in complex environments. Deployment Advantages: It has distinct advantages in dense object detection, occlusion scenarios, and high-precision pose perception (e.g., details of safety helmet wearing, fine-grained action recognition), representing the highest upper limit of detection accuracy achievable by the YOLO family on embedded devices to date. The evolution of algorithms has lowered the threshold for perception, while the popularization of chips has expanded the boundaries of intelligence.
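As a concrete illustration of the conversion flow described above (PyTorch .pt checkpoint exported to ONNX, then converted to .rknn with INT8 quantization), the following is a minimal sketch using the RKNN-Toolkit2 Python API. The file names and preprocessing values are placeholders, and exact parameters vary between toolkit versions and YOLO variants:

```python
# Minimal sketch: converting an exported YOLO ONNX model to .rknn with INT8
# quantization using RKNN-Toolkit2. "yolo_model.onnx" and "calib_images.txt"
# are placeholder file names; adjust normalization to match your training setup.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Preprocessing config for the target chip.
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            target_platform='rk3588')

# Load the ONNX model exported from the original .pt checkpoint.
rknn.load_onnx(model='yolo_model.onnx')

# Build with INT8 quantization; the dataset file lists calibration images.
rknn.build(do_quantization=True, dataset='calib_images.txt')

# Export the chip-ready model for deployment on the NPU.
rknn.export_rknn('yolo_model.rknn')
rknn.release()
```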
Stop Making Blind Choices! An Ultimate Guide to Camera Interface Selection: From MIPI to GMSL
To be honest, for friends working on embedded or AI projects, when they see a table full of odd-shaped camera interfaces for the first time, their inner thoughts are probably: "They're all just for transmitting images—do they really need to be so diverse?" Some come with colorful flat cables, some look like the old coaxial cables in elevators, and others even have an Ethernet cable attached. In fact, this is not manufacturers deliberately making things difficult. The choice of interface essentially boils down to a trade-off between four factors: bandwidth, distance, latency and cost. We won't waste time on textbook jargon today—let's cut to the chase and talk about how these interfaces actually work.

The Trade-off Between Ultimate Speed and Power Consumption: Why Do Mobile Phone Chips Only Support MIPI?

DVP (Digital Video Port): The Retired "Veteran"

DVP is like an old-fashioned "side-by-side boulevard", consisting of 8 to 16 data lines, plus a clock line and synchronization signal lines. It adopts parallel transmission, where data is transmitted in an orderly manner just like a formation of people marching in queue.

Advantages: Its biggest merit lies in simplicity and straightforwardness. It transmits raw level signals without the need for complex encoding and decoding logic. A simple driver is sufficient to make it work, and even low-end microcontrollers can easily handle it.

Disadvantages: Its performance ceiling is rather low. With multiple lines arranged in parallel, when the transmission speed increases (i.e., frequency rises), severe crosstalk and timing skew will occur between the lines. Once the frequency goes up, the screen will be filled with snowflake-like noise. Therefore, it has a very narrow bandwidth and is basically obsolete in the high-definition era.

Application Scenarios: Nowadays, DVP has basically stepped back to a secondary role, mainly being used in barcode scanners, low-pixel toys, or simple sensor data acquisition scenarios. If your project only requires scanning QR codes, DVP is still the most cost-effective choice.

MIPI CSI: The Well-deserved "Overlord of Consumer Electronics"

Why can mobile phones shoot 4K or even 8K videos? All thanks to MIPI. It adopts the low-swing differential transmission mode of MIPI D-PHY/C-PHY.
You can think of it as "a type of differential signal that is more delicate than LVDS but more efficient". It is no longer like an ordinary formation, but rather groups of highly coordinated "elite special forces" twisted around each other. It boasts extremely strong anti-interference capability and incredibly high data transmission efficiency. For example, all models of our regular Neardi development boards are basically equipped with MIPI camera interfaces as standard. LKB3576 Development Board Advantages: Extremely high bandwidth combined with ultra-low power consumption. It can transmit an astonishing volume of data with minimal power loss. More importantly, it interfaces directly with the ISP (Image Signal Processor) inside the SoC. This means that as soon as the image comes in, the ISP can immediately take over processing tasks (color grading, denoising, sharpening) without involving the CPU at all. Disadvantages: It is truly delicate. The transmission distance usually cannot exceed 30 centimeters; the signal will be lost if the PCB traces are routed even a little too far. Moreover, MIPI debugging is a nightmare for all developers—you need to handle complex D-PHY or C-PHY physical layer logic, and also optimize those hair-pulling image quality parameter files. Application Scenarios: It is the core interface for mobile phones, tablets, and embedded AI boxes (RK3576/Raspberry Pi). If you are working on high-real-time face recognition or obstacle avoidance algorithms, MIPI is usually the most professional and efficient choice for on-board direct connection scenarios. Pro Tip: During on-board design, you will find that MIPI cameras are usually connected via thin FPC cables. Never underestimate such cables—their folding endurance and electromagnetic interference (EMI) resistance design directly determine the stability of your video stream. What Should You Do When the Camera Is Over 5 Meters Away from the Host? USB (UVC Protocol): The Versatile "Social Butterfly" USB cameras rely on the UVC (USB Video Class) protocol, enabling plug-and-play image output. Most developers’ Neardi RK3588 integrated devices usually come with multiple reserved USB 3.0 interfaces, and the system layer has already completed UVC driver adaptation. Even if you don’t have an expensive MIPI module at hand, you can directly connect a USB camera to the Neardi board and still run algorithms smoothly. LPB3588 Intelligent Computer Advantages: Plug-and-play (driver-free) functionality is its biggest killer feature. For algorithm verification and demo presentations in the lab, you can get images in 5 minutes, making it a lifesaver for developers. Furthermore, it features extremely low cost—you can use any camera bought easily from a local store. Disadvantages: Its convenience comes at the cost of CPU resources. The raw image data transmitted via USB is excessively large; USB 2.0 simply cannot handle it. Therefore, the camera will first compress the frames using MJPEG or H.264 internally. As a result, your CPU has to allocate a significant portion of its computing power to decompression. Many beginners complain that running YOLO models is too slow—actually, the CPU is already strained from decoding frames before it even starts model inference. If the SoC supports VPU hardware decoding and the corresponding drivers are properly configured, the CPU load from USB cameras can be significantly reduced, but the overall latency still cannot match that of MIPI. 
Additionally, the compression and decompression process introduces a perceptible latency ranging from tens to hundreds of milliseconds. Application Scenarios: Video conferencing, external computer cameras, algorithm demos in the lab, and simple industrial quality inspection. If your real-time performance requirements are not extremely strict and the host has surplus computing power, USB is a perfectly viable choice. RJ45 (Ethernet Port): The "Cornerstone" of Long-Distance Deployment When a camera needs to be installed on the ceiling of a cafeteria or even at a road intersection several kilometers away, an Ethernet cable is almost the most universal and mature choice. To meet such high-concurrency, long-distance monitoring needs, hardware manufacturers have spared no effort in interface configuration. Take Neardi's LPM3588 Intelligent Computer as an example—tailor-made for the NVR (Network Video Recorder) market, it boasts extremely powerful configurations: it supports up to 5 Gigabit Ethernet (1000M) ports and 1 Fast Ethernet (100M) port. This design is simply built to "feed" multiple high-definition network cameras; even if 6 or more channels of high-definition video streams come in simultaneously, the Gigabit bandwidth can easily handle them without any bottlenecks. LPM3588 NVR Computer Advantages: Extremely long transmission distance (100-meter class), which can be extended indefinitely via switches. Most popular among developers is its PoE support—one Ethernet cable handles both power supply and data transmission. The multi-port design like that of the LPM3588 eliminates the need for an external switch, greatly simplifying the wiring complexity of NVR systems. Disadvantages: Relatively high latency. Because images must go through compression, network packaging, transmission, and then decompression. Compared to MIPI's native real-time performance, Ethernet cameras are slightly slower in response speed. Application Scenarios: Security monitoring, smart cities, people flow statistics in cafeterias/supermarkets, and cross-regional remote networking. Simply put, almost all cameras installed on walls or utility poles use this interface. Developer Pitfall Avoidance Guide: If you are working on a project with RK3576 and encounter lag with USB cameras, try lowering the resolution or frame rate, or check if you can call the hardware decoding unit (VPU) to free up the CPU. If your project requires "instant feedback", decisively abandon Ethernet and USB, and switch back to the MIPI interface. Special Industries: Pursuing Ultimate "Reliability and Long-Distance Transmission" In factory workshops, mines, or high-speed moving vehicles, ordinary interfaces can barely last half a day. Interfaces here must solve two ultimate problems: how to maintain clean signals in noisy electromagnetic environments? And how to transmit signals both far and fast? AHD (Analog High Definition): The "Veteran Long-Distance Runner" of the Industrial World Many people think "analog signals" should have been consigned to museums long ago, but AHD has forcibly carved out a niche in the digital age. It uses high-frequency carrier technology to squeeze high-definition video signals into old-fashioned coaxial cables. What's more, it is extremely rugged. In high-vibration, strong-interference environments like special vehicles (such as excavators, dump trucks, and buses), complex digital interfaces are prone to screen glitches due to loosening or electromagnetic waves. 
Neardi's LPA3588 development board is designed specifically for such scenarios, supporting up to 8 channels of 1080P AHD camera input. Imagine a sanitation or logistics vehicle equipped with 8 cameras around its front, rear, left, right, top, and bottom— the LPA3588 can stably receive all 8 channels of signals, and with the RK3588's NPU, perform full-range perimeter anti-collision prediction. This is truly "special forces" level performance. LPA3588 Vehicle Control Host Advantages: Rugged, affordable, and long transmission distance. Its requirements for cables are incredibly low—any coaxial cable can stably transmit signals for 100 to 200 meters, and even farther under specific conditions. Additionally, its signal transmission is real-time and uncompressed, without the latency associated with Ethernet cables. For harsh environments with limited budgets that require long-distance real-time monitoring (such as construction crane footage), it is the undisputed champion. Disadvantages: Does not support "two-way communication". AHD mainly transmits video signals unidirectionally—there’s no way to send complex commands to the camera (such as in-depth parameter adjustment) through this cable. Moreover, the upper limit of image quality is restricted by the analog standard, making it difficult to achieve the purity of digital signals, with subtle noise visible on large screens. Application Scenarios: Surveillance upgrades in old residential areas, rearview and reverse images for buses/trucks, and even some low-cost underground operation equipment. GMSL (Gigabit Multimedia Serial Link) / SerDes: The "Lifeline" of Autonomous Driving This is currently the "top-tier" technology in the automotive field. Imagine an autonomous driving vehicle with cameras mounted at the front, while the main control computer is in the trunk—separated by more than ten meters and surrounded by interference from various high-voltage motors. MIPI can’t reach that far, USB is prone to crashes, and Ethernet has high latency. Thus, SerDes (Serializer/Deserializer) technology came into being. GMSL is a standout among them: it "packages fragile MIPI signals into iron blocks" (serialization) at the transmitting end, sends them through robust shielded cables, and then "unpacks and restores" them to MIPI at the receiving end. GMSL Vision Host Advantages: All-round and high-performance. It achieves true "four-in-one over a single cable": one cable handles video, audio, two-way control signals (I2C/UART), and power (PoC) simultaneously. It boasts extremely high bandwidth (supporting 8-megapixel, 90fps), with end-to-end latency controllable at the millisecond level—far lower than USB or Ethernet solutions—and complies with strict automotive-grade standards. Disadvantages: Expensive and closed ecosystem. Its price is often ten to a hundred times that of USB solutions. Ordinary developers can hardly obtain its complete protocol manual, and debugging usually requires expensive specialized equipment. Application Scenarios: Autonomous driving vehicles at L2/L3/L4 levels, advanced surgical robots, and high-end mobile warehouse robots (AGVs). It is the only choice for high-end mobile devices involving "life-or-death situations" or "ultra-low-latency real-time responses". There is no "best" interface—only the most suitable one for the scenario. Use USB for lab demos, MIPI for high-performance products, RJ45 for remote monitoring, and grit your teeth for GMSL when it comes to automotive or high-end automation applications.
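Following the pitfall-avoidance advice above for USB/UVC cameras, here is a minimal sketch (assuming OpenCV with the V4L2 backend and a camera at device index 0) of requesting an MJPEG stream at a modest resolution and frame rate so that USB bandwidth and CPU decoding do not become the bottleneck:

```python
# Minimal sketch: open a UVC camera, ask for a compressed MJPEG stream and a
# moderate resolution/frame rate. Device index 0 and the availability of the
# V4L2 backend on the board are assumptions.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))  # compressed stream
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
if ok:
    print("Got frame:", frame.shape)
cap.release()
```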
Neardi Pi 4-3588: Unleashing 8K Ultra-Fast Intelligence, Empowering the New Era of Enterprise-Grade Edge Computing
In today's rapidly evolving landscape of AIoT and edge computing, developers and enterprises are placing higher demands on the performance, stability, and scalability of core hardware. The Neardi Pi 4-3588 development board makes its official debut: it is not only an open-source hardware platform but also a powerful engine for you to transform cutting-edge algorithms into mass-produced products.

Peak Performance: Octa-core Architecture, Flagship Power

The Neardi Pi 4-3588 is equipped with Rockchip's flagship RK3588 chip, which adopts an advanced 8nm process, combining high performance with tight power consumption control.

Robust Processor: It features a quad-core Cortex-A76 and quad-core Cortex-A55 big.LITTLE architecture, supporting dynamic task allocation to easily handle complex computing scenarios.

Top-tier Graphics Processing: It integrates the ARM Mali-G610 MP4 GPU, fully supporting mainstream graphics interfaces such as OpenGL ES 3.2 and Vulkan 1.2, meeting the needs of high-precision visual quality.

Surging AI Computing Power: It has an integrated NPU with up to 6TOPS of computing power, supporting INT4/INT8/INT16 mixed operations and accelerating model inference for frameworks such as TensorFlow, PyTorch, and Caffe.

Visual Feast: 8K Encoding and Decoding with Ultimate Display

The Neardi Pi 4-3588 is designed for visual applications. It supports 8K@60fps H.265/VP9 hardware decoding and 4K@60fps encoding, combined with HDR processing, delivering cinema-level visual quality.

Multi-screen Interconnection: It features on-board HDMI output (supporting up to 8K@30fps or 4K@120fps) and provides a MIPI-DSI interface, facilitating multi-screen heterogeneous display applications.

Multi-channel Acquisition: It is equipped with three MIPI-CSI camera interfaces, providing a solid hardware foundation for machine vision and multi-camera stitching.
Comprehensive Connectivity: Rich Industrial-grade Interfaces As an "enterprise-grade" platform, the Neardi Pi 4-3588 does not compromise on scalability, providing interfaces that cover the vast majority of industrial scenarios: High-speed Storage: It supports external NVMe protocol M.2 Key M (SSD 2242) storage expansion. Full-network Communication: It features dual gigabit Ethernet ports, dual-band WiFi 6, Bluetooth 5.4, and a reserved mini-PCIe interface to support 4G/5G modules. Full Industrial Protocol Coverage: It has on-board CAN FD, RS485, UART, I2C, SPI, and other commonly used communication interfaces, seamlessly connecting to various sensors and industrial peripherals. Developer-friendly: Full-stack Open Source, Rapid Mass Production We fully understand the importance of the development cycle. The Neardi Pi 4-3588 provides not only hardware but also an ecosystem: Multi-system Support: It perfectly matches with Android, Buildroot, Debian, and Ubuntu systems. Open-source Code: It provides users with open-source system code, complete WIKI documentation, kernel drivers, and flashing tools. Comprehensive Support: Lindi Technology offers in-depth support from technical consulting to custom development, helping you significantly shorten the cycle from prototype to mass production. Industrial-grade Quality: It offers commercial-grade (-20℃~75℃) and industrial-grade (-40℃~85℃) versions to meet the stable operation requirements in harsh environments. Wide Range of Application Scenarios Thanks to its high performance and reliability, the Neardi Pi 4-3588 has been widely applied in: Artificial Intelligence and Vision: Object recognition, machine vision, security surveillance. Smart Display: Smart tablets, commercial display screens. Industry and Transportation: Industrial control, energy power, in-vehicle terminals, smart logistics. Relying on the powerful chip performance of Rockchip RK3588 and Lindi Technology's profound industry customization experience, the excellent performance comes from the refinement of every detail; rapid mass production comes from a mature and stable ecosystem support. The Neardi Pi 4-3588 is now officially on sale, with a complete SDK, technical documentation, and expert-level technical support ready.
An In-Depth Interpretation of RK3588's 6TOPS Bottleneck and the Truth About NPU Computing Power
Imagine you're working on an edge AI project with the RK3588: the camera video stream needs to perform real-time face recognition and vehicle detection, while also supporting UI display, data upload, and business logic processing. You notice: frame drops occur when there are many objects in the frame, large models fail to run smoothly, and the temperature rises sharply. At this point, people usually say: "Your model is too large—RK3588's 6TOPS isn't enough." But is it really a lack of computing power? Have you ever wondered: Why does a 6TOPS NPU still experience frame drops and lag when running a 4TOPS model? The answer lies in three dimensions of NPU computing power: Peak Performance (TOPS), Precision (INT8/FP16), and Efficiency (Bandwidth).

You will see that various chips emphasize their NPU specifications, with a core parameter prominently displayed: NPU Computing Power: X TOPS. Examples include RK3588-6TOPS, RK3576-6TOPS, RK1820-20TOPS, Hi3403V100-10TOPS, Hi3519DV500-2.5TOPS, Jetson Orin Nano-20/40TOPS, Jetson Orin NX-70/100TOPS, and so on.

What is TOPS? Why is everyone talking about it?

Tera: Represents 10¹².
Operations Per Second: Refers to the total number of AI operations the NPU can perform in one second.

In simple terms, 1 TOPS means the NPU can execute 1 trillion (10¹²) operations per second.

How is TOPS calculated?

Total number of MAC units: The MAC (multiply-accumulate) unit is the core of neural network computing. In convolutional layers and fully connected layers, the main computation involves multiplying input data by weights and then summing the results. The design philosophy of an NPU lies in having an extremely large array of parallel MAC units.
An NPU chip may contain thousands or even tens of thousands of MAC units, which can work simultaneously to achieve large-scale parallel computing. The more MAC units there are, the greater the amount of computation the NPU can complete in a single clock cycle. Clock Frequency: Determines the number of cycles the NPU chip and its MAC units operate per second (measured in Hertz, Hz). A higher frequency allows the MAC array to perform more multiply-accumulate operations per unit time. When manufacturers announce TOPS, they use the NPU's peak operating frequency (i.e., the maximum achievable frequency). Operations per MAC: A complete MAC operation actually includes one multiplication and one addition. To align with the traditional FLOPS (Floating-Point Operations Per Second) counting method, many computing standards count one MAC operation as 2 basic operations (1 for multiplication and 1 for addition). Precision Factor: The MAC units of an NPU are optimized for processing low-precision data (e.g., INT8). Simplified speedup ratio of INT8 vs FP32: Since 32 bits / 8 bits = 4, a single FP32 unit can theoretically perform 4 times as many operations in one cycle when switched to INT8 computation. Therefore, if a manufacturer's TOPS is calculated based on INT8, it needs to be multiplied by a precision-related speedup ratio. This is why INT8 TOPS is much higher than FP32 TOPS. TOPS measures peak theoretical computing power. In practical applications, due to factors such as data transmission, memory constraints, and model structure, the actual effective computing power of an NPU is often lower than this peak value. Computing power is about speed; precision is about "fineness." Computing power tells us how fast an NPU runs, while computational precision tells us how finely it operates. Precision is another key dimension of NPU performance, determining the number of bits used and the representation range of data during computation. At the same TOPS level, the actual computing speed of INT8 is much faster than that of FP32. This is because the NPU's MAC units can process more 8-bit data at once and perform more operations. The NPU TOPS claimed by manufacturers are usually based on INT8 precision. When making comparisons, ensure that you are comparing TOPS under the same precision. High Precision (Typically Used for Training) FP32 (Single-Precision Floating-Point, 32-bit): Offers the largest numerical range and precision. Commonly used in traditional GPU and PC computing. Models typically adopt FP32 during the training phase to ensure accuracy. FP16/BF16 (Half-Precision Floating-Point, 16-bit): Reduces data volume by half while maintaining a certain level of precision, enabling faster computation and memory savings. Low Precision (Typically Used for Inference) INT8 (8-bit Integer): Currently the industry standard for evaluating inference performance of edge-side NPUs. The process of converting model weights and activation values from high precision (e.g., FP32) to 8-bit integers is called Quantization. INT4 (Lower Bit-Width): Features further compression, suitable for scenarios with extremely high requirements for power consumption and latency, but imposes higher demands on controlling model precision loss. How to Understand the Actual Performance of an NPU? When you see an NPU claiming 20 TOPS (INT8), you need to understand: The peak computing power is 20 trillion operations per second. This computing power is measured under 8-bit integer (INT8) precision. 
This means it is mainly used for AI inference (such as image recognition or speech processing), not training. The final performance depends on the application: the actual user experience (such as face-unlock speed or real-time translation latency) relies not only on the NPU's TOPS but also on:

- Model quantization quality: whether the quantized INT8 model maintains sufficient accuracy.
- Memory bandwidth: the speed of data input and output.
- Software stack and drivers: how well the chip manufacturer's toolchain and drivers are optimized for model deployment.

An NPU's computing power (TOPS) is an indicator of its speed, while computational precision (e.g., INT8) is key to its efficiency and applicability. For end-user-facing devices, manufacturers generally aim to maximize INT8 TOPS while keeping precision loss acceptable, to deliver low-power, high-efficiency AI inference.
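To make the arithmetic above concrete, here is a minimal Python sketch that combines the factors into the peak-TOPS formula and adds a simple bandwidth ceiling in the spirit of the roofline model. All the numbers in it (MAC count, clock frequency, memory bandwidth, arithmetic intensity) are illustrative assumptions chosen for round results, not the published specifications of any particular chip.

```python
# Illustrative sketch only: MAC count, clock, bandwidth and arithmetic
# intensity below are assumed round numbers, not real chip specifications.

def peak_tops(mac_units: int, freq_hz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS = MAC units * frequency * ops-per-MAC / 1e12."""
    return mac_units * freq_hz * ops_per_mac / 1e12

def bandwidth_bound_tops(bandwidth_gb_s: float, ops_per_byte: float) -> float:
    """Roofline-style ceiling: how many ops/s the memory system can feed."""
    return bandwidth_gb_s * 1e9 * ops_per_byte / 1e12

if __name__ == "__main__":
    # Assumed NPU: 3,000 INT8 MAC units running at 1.0 GHz -> 6.0 TOPS peak
    compute_roof = peak_tops(mac_units=3_000, freq_hz=1.0e9)
    # Assumed workload: ~20 INT8 ops per byte moved, ~25 GB/s usable bandwidth
    memory_roof = bandwidth_bound_tops(bandwidth_gb_s=25, ops_per_byte=20)
    print(f"compute ceiling  : {compute_roof:.1f} TOPS")
    print(f"bandwidth ceiling: {memory_roof:.1f} TOPS")
    print(f"effective ceiling: {min(compute_roof, memory_roof):.1f} TOPS")
```

Under these assumed numbers the memory system, not the MAC array, sets the effective ceiling, which is one way a "6 TOPS" NPU can struggle with a nominally smaller model.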
Innovation and Breakthrough of Homegrown AI Chips: Opportunities and Challenges in the Edge-Terminal Era
After the explosion of large AI models, compute is no longer confined to the cloud; more and more intelligent algorithms now run locally on edge devices. Smart cameras recognize human shapes and behaviors, in-vehicle terminals monitor driving in real time, industrial cameras auto-detect defects, and robot vacuums identify targets offline. Edge AI has become the fastest-growing, most widely deployed segment and the one with the strongest push for domestic substitution, giving homegrown SoC vendors a window in multimedia and AI processing.

Edge-AI Market: The Fastest-Growing, Densest AI Battlefield

Strictly speaking, the edge-AI market splits into edge-terminal and edge-server segments. Edge servers are bulky, costly, and high-compute, serving smart parks and factory edge nodes. Edge terminals (this article's focus) are high-volume, cost-sensitive, and fragmented across scenarios. They sense the environment and process video and voice on the spot, delivering edge-AI functions that empower the hardware around them.

Which devices count as edge terminals? Smart cameras (IPC, smart doorbells, dash cams), industrial vision cameras and QC terminals, in-vehicle modules (AVM, DMS, DVR, ADAS assist), self-service retail kiosks, smart-home devices (speakers, vacuums, appliance controls), and smart-city edge nodes; all of them must run processing locally.

Two Technical Paths for Edge-AI Chips: SoC Integration vs. Discrete AI Accelerator

Edge-AI chips follow two paths: an SoC with a built-in NPU for low-cost, low-power, all-in-one intelligence, or a discrete AI accelerator that adds compute for multi-model, heavy-load professional inference.

An SoC (System on Chip) integrates CPU, GPU, AI, video, audio, and peripherals in one die. A built-in NPU has become industry consensus and the most widely adopted edge-AI approach.

Rockchip RK3576, a general-purpose edge-AI SoC: 8-core CPU (4×A72 + 4×A53); 6-TOPS NPU (INT4/8/16, FP16); Mali-G52 GPU; 8K decode, 4K encode; multiple MIPI-CSI inputs for multi-camera use and multi-display output (DSI, HDMI). It targets industrial tablets, AI cameras, vehicular DVR, and robot vision.

HiSilicon Hi3403V100 couples an AI-ISP (image enhancement with AI co-optimization). It is a professional-vision SoC with quad A55 cores and a 10-TOPS NPU tightly merged with the ISP.
Its high-spec ISP excels in back-lit and low-light scenes; it offers multiple 4K video inputs and outputs and high deployment efficiency for detection and tracking.

How to split tasks efficiently among CPU, GPU, NPU, and a discrete accelerator (pre-processing on GPU/CPU, inference on NPU/accelerator, post-processing on CPU) is the key performance challenge. Hence AI accelerators were born: dedicated to inference and linked to the main SoC via PCIe. The SoC handles system scheduling, video, graphics, and UI; the accelerator runs the models and supplies AI compute.

Rockchip RK1820, an NPU coprocessor for high-performance edge AI, acts as the "second brain": a 20-TOPS NPU with standalone model execution and INT8/16/FP16 support; it pairs with the RK3576/RK3588 via PCIe for higher inference throughput.

Homegrown AI-Chip Positioning: Win by "Picking the Right Track," Not "Stacking Specs"

In edge AI, TOPS, CPU cores, and process node matter, but survival hinges on choosing the right track.

Rockchip: the widest-portfolio, strongest-ecosystem "general vision-AI platform." Rockchip's aim is not the fastest chip but the richest ecosystem. Full compute ladder: RV1103/1106 for light cameras; RV1126/1109 as the security default; RK3576 for mid/high-end terminals; RK3588 for flagship edge; RK3688 as the next-gen high-compute core. This matrix is the universal base for homegrown gear, from low-power IPC to industrial gateways, AR glasses, robots, education boxes, and vehicular DMS/CMS. Tech edge: balanced multimedia + ISP + AI; strong codecs, a strong ISP, and the mature RKNN toolchain. The strategy is full-scene coverage, not single-point breakthrough.

Allwinner: ultra-light AI + ultra-low-power IoT. Not for big models but for massive IoT and consumer devices. Position: low-power, high-volume, cost-sensitive. Smart speakers (rich I2S/PDM/mic support), light edge-AI cameras, small-appliance control, TTS/voice terminals. V853/V831: ultra-light AI SoCs. R-series: control MCU-SoCs. Allwinner chases "10-million-unit scenarios," not TOPS.

Amlogic: multimedia king, AI as a bonus. A global leader in OTT boxes and smart-TV SoCs. Position: home media hub + consumer smart devices. AI is an enhancer; the core strengths are video decode, HDR, A/V sync, and the TV/OTT ecosystem. Strong in smart projectors, conference bars, and home-entertainment all-in-ones.

Fullhan: security-vision specialist. Almost exclusively surveillance cameras. Position: security-dedicated SoCs. Strengths: strong ISP, strong compression, strict cost control, tight alignment with the Hikvision/Dahua ecosystems. Flagships: FH8856, FH8852. Fullhan digs deep into the single, huge surveillance track, winning on stability and cost.

Ingenic: ultra-low power + ultra-light AI. MIPS-based, focused on wearables and smart home. Position: featherweight smart devices in tiny packages. Applications: smart doorbells, light IPC, kids' watches, micro edge nodes. Traits: lowest power, high integration, small footprint. The AISoC series targets light vision inference.

Real Edge-AI Needs: Not More TOPS, but "Interface Matrix + Scenario Fit"

For two years the talk was all TOPS (3, 6, 12), as if bigger numbers meant better chips. Core competence is never raw TOPS; it is "interface matrix + scenario fit." In security cameras, industrial cameras, smart doorbells, and vehicular DMS/ADAS, what counts is: are there enough camera ports (MIPI-CSI, DVP)? How many video streams? Real-time encode (H.264/H.265, 8K/4K)? ISP tuning quality? In industrial DTU, smart gateway, robot, and energy scenarios, peripherals trump TOPS: dual GbE/2.5G/RGMII/SGMII, RS232/485/CAN/UART, Wi-Fi/BT, 4G/5G modules, multiple USB/SPI/I2C.
In smart control panels, aftermarket car displays, AR/VR, and smart POS, priorities shift to display ports (MIPI-DSI, HDMI, eDP), multi-screen support, and UI performance (GPU/graphics), with AI as a helper, not the star. Aftermarket automotive keywords: shock resistance, wide supply-voltage swings, -40 to 85 °C operation, eMMC lifetime, multiple CSI lanes for DMS/OMS/ADAS, and millisecond-level video latency.

Edge AI is the best track for homegrown chips; the opportunity comes not from stacking TOPS but from knowing the scene and nailing the demand. Over the next few years, smart cameras, vehicles, industry, and home devices will be the stage where domestic chips prove themselves.
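As a rough illustration of the CPU/GPU/NPU task split mentioned above, the following Python sketch wires three pipeline stages (capture and pre-processing, inference, post-processing) together with queues and threads. Every function in it is a stub: on a real board the inference stage would typically call the SoC vendor's runtime (for example Rockchip's RKNN toolchain) rather than the placeholder shown here.

```python
# Minimal pipeline sketch: decouple pre-processing, inference and
# post-processing so each stage can run on the unit best suited to it.
# All functions below are placeholders, not a vendor API.
import queue
import threading

def capture_and_preprocess(out_q: queue.Queue, num_frames: int) -> None:
    """CPU/GPU stage: grab frames and resize/normalize them (stubbed)."""
    for i in range(num_frames):
        out_q.put(f"frame-{i}")        # stand-in for a real image buffer
    out_q.put(None)                    # sentinel: no more frames

def infer(in_q: queue.Queue, out_q: queue.Queue) -> None:
    """NPU/accelerator stage: run the quantized model (stubbed).
    On a Rockchip part this would usually go through the RKNN runtime."""
    while True:
        frame = in_q.get()
        if frame is None:
            out_q.put(None)
            break
        out_q.put((frame, "detections"))   # placeholder inference result

def postprocess(in_q: queue.Queue) -> None:
    """CPU stage: NMS, tracking, business logic (stubbed)."""
    while True:
        item = in_q.get()
        if item is None:
            break
        frame, result = item
        print(f"{frame}: {result}")

if __name__ == "__main__":
    pre_q: queue.Queue = queue.Queue(maxsize=4)   # small queues give back-pressure
    post_q: queue.Queue = queue.Queue(maxsize=4)
    stages = [
        threading.Thread(target=capture_and_preprocess, args=(pre_q, 8)),
        threading.Thread(target=infer, args=(pre_q, post_q)),
        threading.Thread(target=postprocess, args=(post_q,)),
    ]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
```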
All-In-One Guide to Wi-Fi Protocol Evolution—Reach the Performance Summit with Wi-Fi 6!
From the earliest dial-up connections to today's everything-interconnected era, our pursuit of speed and stability has never stopped. Every Wi-Fi protocol innovation marks a giant leap in our smart lives. These protocols all originate from the IEEE 802.11 family of standards, evolving from 802.11b to today's Wi-Fi 4/5/6.

Early development and performance leaps: from 802.11b/g to 802.11ax

- 802.11b: 2.4 GHz band, 11 Mbps peak rate; laid the foundation and brought Wi-Fi to the mass market.
- 802.11a: 5 GHz band, 54 Mbps peak; first to adopt OFDM, but 5 GHz gear was scarce, so it never became widespread.
- 802.11g: 2.4 GHz band, 54 Mbps peak; merged the best of both, using OFDM in 2.4 GHz for higher speed while staying backward-compatible with 802.11b.
- 802.11n (Wi-Fi 4): 2.4 & 5 GHz bands, 600 Mbps peak (4×4 MIMO, 40 MHz); introduced MIMO, broke the 100 Mbps barrier, and added dual-band support.
- 802.11ac (Wi-Fi 5): 5 GHz band only, 6.9 Gbps peak (8×8 MIMO, 160 MHz); brought in downlink MU-MIMO, widened channels, and boosted bandwidth.
- 802.11ax (Wi-Fi 6): 2.4 & 5 GHz bands, 9.6 Gbps peak (8×8 MIMO, 160 MHz, 1024-QAM); offers high efficiency (capacity), low latency, low power, and strong anti-interference.

MIMO (802.11n): Previously, data was transmitted over a single stream. MIMO uses multiple antennas to transmit and receive data simultaneously, enabling parallel multi-stream transmission and significantly increasing data rates and coverage.

Downlink MU-MIMO (802.11ac): For the first time, a router can send data to multiple terminal devices at the same time (downlink), effectively improving network efficiency in multi-device scenarios. Wi-Fi 5 supports DL MU-MIMO only; Wi-Fi 6 extends it to both uplink and downlink.
Wi-Fi 6: the pinnacle of efficiency and stability

Wi-Fi 6 (802.11ax) is more than a speed boost; it is an efficiency revolution that tackles congestion, latency, and power drain, laying the groundwork for next-gen IoT. MU-MIMO handles high-throughput, large-packet data streams; OFDMA handles multi-device, small-packet scenarios.

OFDMA (Orthogonal Frequency-Division Multiple Access): Traditional Wi-Fi serves only one device at a time; OFDMA splits the channel into multiple resource units (RUs) and can deliver data to several devices simultaneously, dividing a single data channel into many small sub-carriers and carrying small packets to multiple devices at the same time. Advantage: greatly reduced latency, especially in IoT scenarios with small data volumes but many devices, improving efficiency by up to 4×.

UL/DL MU-MIMO (uplink/downlink multi-user MIMO): Wi-Fi 5 supports only the downlink (router to device); Wi-Fi 6 adds bidirectional MU-MIMO, letting devices transmit to the router simultaneously and eliminating queuing delays.

TWT (Target Wake Time): The router negotiates the next communication time with each device, and the device can enter deep sleep outside the scheduled window. Advantage: greatly reduced battery drain, extending IoT device battery life by a factor of 2 to 10.

BSS Coloring & Spatial Reuse: By adding a "color tag" to BSS packets, the system intelligently identifies and ignores interference from neighboring networks, significantly improving stability and anti-interference capability in dense residential environments.

Performance & Dual-Mode Integration: FD7352S Wi-Fi 6 Solution

Neardi's FD7352S module is built on the Wi-Fi 6 Wave 2 protocol and integrates the technologies above, making it an ideal choice for high-performance, high-reliability IoT products.

2T2R architecture: FD7352S uses two transmit (2T) and two receive (2R) antennas for high-performance transmission. Theoretical rates: 572.4 Mbps at 2.4 GHz and 1.2 Gbps at 5 GHz, with measured throughput up to 550 Mbps. Modulation: 1024-QAM packs more data into each symbol, ensuring smooth, stable HD video streams.

Wi-Fi 6 and BT 5.4 coexistence: FD7352S is not only a Wi-Fi 6 module but also an 802.11ax + Bluetooth 5.4 dual-mode combo. Coexistence mechanism: in the 2.4 GHz band, Wi-Fi and Bluetooth often interfere; FD7352S's hardware-level arbitration intelligently schedules Wi-Fi data and Bluetooth audio/control packets, keeping both stable, which suits fast Bluetooth pairing plus high-quality Wi-Fi video. Bluetooth 5.4: supports the latest BT v5.4, backward-compatible with BR/EDR/LE 1M/LE 2M/LE LR, providing reliable, low-power, long-range sensor connectivity.

High-integration interfaces: supports SDIO 3.0 (high-speed data) + HS-UART (control) + PCM (HD audio), ensuring broad compatibility. With outstanding 2T2R performance, UL/DL OFDMA efficiency, TWT low-power operation, and Bluetooth 5.4 dual-mode coexistence, FD7352S delivers a one-stop solution for next-gen smart products.
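To show where the 9.6 Gbps headline figure for Wi-Fi 6 comes from, here is a short Python sketch that derives the PHY rate from standard OFDM parameters (data subcarriers, modulation bits, coding rate, spatial streams, symbol duration). The values used (1960 data subcarriers at 160 MHz, 1024-QAM with rate-5/6 coding, 0.8 µs guard interval, 8 streams) follow the commonly cited 802.11ax figures and are included as a worked illustration, not as a statement about any specific module.

```python
def phy_rate_bps(data_subcarriers: int,
                 bits_per_symbol: int,
                 coding_rate: float,
                 streams: int,
                 symbol_us: float = 12.8,
                 guard_us: float = 0.8) -> float:
    """Approximate 802.11ax PHY rate: bits per OFDM symbol / symbol duration."""
    bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate * streams
    return bits_per_ofdm_symbol / ((symbol_us + guard_us) * 1e-6)

if __name__ == "__main__":
    # 160 MHz channel (~1960 data subcarriers), 1024-QAM (10 bits/symbol),
    # rate-5/6 coding, 8 spatial streams -> roughly 9.6 Gbps
    rate = phy_rate_bps(data_subcarriers=1960, bits_per_symbol=10,
                        coding_rate=5 / 6, streams=8)
    print(f"{rate / 1e9:.2f} Gbps")   # prints about 9.61 Gbps
```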
Authoritative Analysis | Why More and More Core Modules Choose Board-to-Board (B2B) Connections?
In the design of a core module, the connection method is often overlooked, yet it determines the structural stability, signal integrity, and maintainability of the entire system. Over the past few years, a growing number of core modules, development boards, and even the main control systems of entire devices have begun to evolve toward board-to-board connections. Why are more and more manufacturers switching to this solution? Is it really superior? Today, we will thoroughly explain everything from structural design to mass-production practice in one go.

Complete analysis of mainstream interconnects: who uses them and what are their pros and cons?

In high-compute, high-interface-density SoC systems, board-to-board connectors have become the preferred solution that balances signal integrity with mechanical reliability.
Interconnect comparison (typical use; advantages; drawbacks):

- LCC: small form-factor modules; low cost, easy to solder; non-removable, poor long-term reliability.
- Edge-card: high-speed hot-plug applications; reliable contact, mature volume process; severe mechanical constraints, restricted PCB outline.
- FPC flex cable: ultra-thin or foldable devices; flexible routing, thin profile; weak EMI shielding, limited mechanical stability.
- Board-to-board connector: industrial motherboards, AI compute modules; high-density mating, robust, field-serviceable; slightly higher cost, tight placement tolerance.

Why Choose Board-to-Board Connectors? Key Strengths Breakdown

High-density signal transport, engineered for speed-hungry SoCs: with high-performance SoCs like the RK3588 and RK3576 going mainstream, module-to-carrier signaling is no longer a "few-dozen-line" task; it is a hundred-plus high-speed channel problem. Board-to-board connectors readily deliver 40–120 pins of high-speed signals while maintaining tight impedance control and excellent signal-integrity (SI) performance. The LKB3576 carrier board employs four Panasonic AXK5F80537YG board-to-board connectors (80-position, 0.5 mm pitch) secured with four M2 screws. Compared to FPC or LCC castellated holes, board-to-board connectors deliver:

- Lower signal loss, especially at 2–5 Gbps;
- Stronger EMI shielding through well-grounded pin-to-pin isolation;
- Controllable mating tolerance: precision pin-and-socket alignment within ±0.05 mm.

AI motherboards, industrial gateways, automotive head units, and machine-vision hosts all run multiple simultaneous MIPI, USB 3.0, PCIe, and Gigabit Ethernet links; board-to-board interconnects preserve the stability and uniformity of these high-speed signals better than any alternative.

Superior mechanical robustness and vibration resistance: in automotive and industrial settings, prolonged vibration and thermal cycling readily loosen interconnects, and FPC flex cables in these environments often suffer EMI pickup, signal drift, or intermittent contacts. Board-to-board connectors, built with metal pins and press-fit sockets, give three mechanical edges:

- High vibration immunity: 60–80 N mating retention survives repeated shock and shake.
- Gold-plated contacts: maintain low-resistance paths over thousands of thermal cycles.
- Rigid mounting: optional screws and locating posts lock the mated pair to the chassis, eliminating micromotion.

Faster assembly and field service, streamlining volume production: for production-line engineers, the biggest draw of board-to-board is solder-free, reusable mating.

- Core modules plug in and pull out in seconds; no reflow required.
- When a board fails, swap the top module; the carrier stays in the chassis.
- Cuts SMT cost and lifetime service cost.
- Zero high-temperature cycles, so no heat-stress damage.
- Assembly/disassembly throughput rises 3–5×.
- A larger alignment window allows semi-automatic insertion, forgiving normal handling tolerances.

High space efficiency, optimized for compact designs: as embedded devices push toward smaller and thinner form factors, board-to-board connectors enable a vertical stack in which two PCBs sit almost face-to-face, maximizing volumetric efficiency.
- Module thickness drops to 2–6 mm.
- Shorter internal traces give cleaner signal paths.
- A tidier enclosure eases heat-spreading and shielding design.

Product Case Study: LKD3576 Development Board

- SoC: RK3576, octa-core 64-bit (4×A72 + 4×A53), ARM Mali-G52 MC3 GPU, 6 TOPS NPU
- Codec: 4K@60 fps H.264/AVC decode, 8K@30 fps or 4K@120 fps H.265/HEVC decode; 4K@60 fps H.264/H.265 encode
- Memory: RAM supports LPDDR4/4X/5, ROM supports eMMC 5.1; options 4 GB+32 GB, 8 GB+64 GB, 16 GB+128 GB
- OS support: Android, Ubuntu, Buildroot, Debian, openEuler, Kylin
- Interconnect: four 80-pin, 0.5 mm pitch, 2 mm stacking-height Panasonic board-to-board connectors; socket AXK5F80537YG, header AXK6F80347YG

Board-to-board connection brings easy assembly and maintenance, industrial-grade rich interfaces, multi-type expansion support, anti-vibration and anti-interference design, and stable long-term operation, making it suitable for in-vehicle control, AI edge-computing terminals, and industrial smart gateways. Board-to-board connection is becoming the new standard in embedded hardware design, offering a balanced solution among performance, reliability, and maintainability.
3TOPS Edge Computing Benchmark | Rockchip RV1126 Series Full Analysis
Rockchip has established a comprehensive portfolio of visual computing chips spanning from 0.5T to 6T, covering everything from basic smart cameras to advanced industrial vision systems. A standout in this lineup is the RV1126 series, specifically the RV1126B and RV1126B-P. These variants are extensively used in smart security (IPC, smart doorbells, dashcams), industrial vision (factory cameras, inspection equipment), smart automotive and home applications, smart cities, and edge AI scenarios.

The RV1126B, with its 3 TOPS of compute, customized AI-ISP architecture, dynamic stitching, stabilization, advanced encoding, and hardware-level security, delivers high-performance solutions that move AIoT devices from simply "seeing" to truly "understanding."

The RV1126B-P is a cost- and package-optimized variant of the RV1126B, retaining the full core compute (CPU/GPU/VPU/NPU) while reducing pin count, removing USB 3.0, and trimming auxiliary interfaces (CAM_CLK/SARADC) to lower costs. It targets low-storage, bandwidth-light applications such as dashcams and DMS devices. Key advantages include:

- Pin-to-pin compatibility: a direct replacement for the RV1126 with no hardware redesign needed.
- Same core performance: equivalent compute, ISP, and AI capabilities as the RV1126B.
- Seamless upgrades: minimal software changes required for existing RV1126-based products to reach RV1126B-level performance.

This makes the RV1126B-P an ideal drop-in upgrade for manufacturers looking to enhance products with minimal R&D investment. The RV1126B, built on a quad-core Cortex-A53 (1.5 GHz) architecture, integrates Rockchip's in-house 3 TOPS NPU.
It supports W4A16/W8A16 mixed-precision quantization and Transformer-optimized acceleration, enabling efficient on-device execution of large and multimodal models with up to 2B parameters. For imaging, it features a dedicated AI-ISP engine with AI Remosaic technology for "day-and-night adaptive imaging," achieving clear pictures in ultra-low-light conditions (0.01 lux). This addresses nighttime noise issues and, combined with 6-DoF digital stabilization and dual/quad-camera dynamic stitching, ensures stable, wide-angle imagery even in motion.

At the system level, the RV1126B's AOV 3.0 low-power architecture reduces standby power to 1 mW. It supports 24/7 anomaly-sound wake-up (e.g., barking, glass breaking, gunshots), balancing energy savings with real-time alerts. The integrated Super Encoding Engine reduces bitrate by 50% without clarity loss, lowering transmission and storage costs. For security, it offers SM2/SM3/SM4 encryption, TrustZone isolation, and a key management system, providing end-to-end protection from data capture to inference.

Processor Architecture

RV1126B and RV1126B-P:
- Core configuration: quad-core ARM Cortex-A53, 64-bit architecture, ARMv8-A instruction set.
- Cache: 32 KB L1 I-cache + 32 KB L1 D-cache per core, with a shared 512 KB L2 cache.
- Extended features: integrated Neon Advanced SIMD and FPU, supports TrustZone technology.

RV1126:
- Core configuration: quad-core ARM Cortex-A7, 32-bit architecture, ARMv7-A instruction set, with integrated Neon Advanced SIMD and FPU.
- Cache: 32 KB L1 I-cache + 32 KB L1 D-cache per core, with a shared 512 KB unified L2 cache.

NPU Performance

RV1126B and RV1126B-P:
- Compute power: 3.0 TOPS INT8 (sparse optimization supported), compatible with INT4/INT8/INT16/FP16 operations.
- Framework support: TensorFlow, Caffe, TFLite, PyTorch, ONNX, Android NN, etc.

RV1126:
- Compute power and precision: 2.0 TOPS with INT8/INT16 hybrid operations, supporting 8/16-bit integer convolution.
- Framework compatibility: TensorFlow, TF-Lite, PyTorch, Caffe, ONNX, MXNet, Keras, Darknet, etc., with OpenVX API support.

ISP Features (RV1126B and RV1126B-P): 2D Graphics Engine (RGA)
- Supported data formats: input ARGB/RGB/YUV series formats (including TILE4X4 packing); output ARGB/RGB/YUV420/422 and other 8-bit formats.
- Core functions: scaling from 1/16× to 16× with non-integer ratios (downsampling with averaging/bilinear filtering, upsampling with bicubic filtering); rotation by 0/90/180/270 degrees with mirroring; plus alpha blending, color filling, and OSD overlay.
- Resolution limits: max input 8192×8192, max output 4096×4096.

Video Encoding and Decoding

Both the RV1126 and RV1126B support H.265/H.264 video encoding and decoding, enabling efficient compression, storage, and transmission of 4K UHD video. The RV1126B additionally supports multi-stream encoding, with a built-in intelligent encoding engine for ultra-HD encoding at up to 8 MP @ 45 fps. It also features dynamic bitrate adjustment, reducing bitrate by up to 50% compared to traditional CBR mode.

Both the RV1126B and RV1126 offer a variety of audio, storage, and peripheral interfaces, and support high-performance external DRAM to meet basic multimedia and peripheral-expansion needs. They integrate system-level functional modules such as RTC, POR, RMII Ethernet PHY, and audio codecs. The RV1126 supports 12 Kbit of One-Time Programmable (OTP) memory for unique identification and secure storage.
The RV1126B introduces an AI-ISP that works together with the NPU, while the RV1126 does not support AI-ISP. The RV1126B and RV1126B-P support dual-channel CAN FD, making them suitable for automotive and industrial control applications. The RV1126B supports USB 3.0, whereas the RV1126B-P and RV1126 support only USB 2.0. Deployed across multiple industries, the RV1126B series, with its 3 TOPS of AI compute and AI-ISP architecture, is widely used in IPC, smart doorbells, and AI PTZ cameras. Its high integration and low-power design upgrade plain video recording into smart analysis.
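Since much of the RV1126B's NPU advantage comes from low-precision inference (INT8 compute plus the W4A16/W8A16 mixed-precision modes mentioned earlier), the sketch below shows the basic idea of symmetric INT8 weight quantization with a single per-tensor scale. It is a conceptual illustration of what "quantization" means in this context, not the calibration scheme actually used by Rockchip's toolchain.

```python
# Conceptual sketch of symmetric INT8 quantization (one per-tensor scale).
# Real toolchains (e.g. RKNN) use more elaborate, per-channel calibration.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map FP32 weights to INT8 codes plus one shared scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0                      # 127 = largest positive INT8 code
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int8(codes: list[int], scale: float) -> list[float]:
    """Recover approximate FP32 values from the INT8 codes."""
    return [c * scale for c in codes]

if __name__ == "__main__":
    w = [0.42, -1.30, 0.07, 0.95, -0.56]
    codes, scale = quantize_int8(w)
    w_hat = dequantize_int8(codes, scale)
    print("codes :", codes)
    print("scale :", round(scale, 5))
    print("error :", [round(a - b, 4) for a, b in zip(w, w_hat)])
```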