Do you remember the controversy over the 'blue-and-black or white-and-gold dress'? That disagreement stems from differences in the cone cells our eyes use to distinguish colors, and a similar 'dress debate' exists in human-machine interfaces. Why is that? This article unravels the mystery and introduces the design of human-computer interaction interfaces from the perspective of color.
Everyone knows that putting an elephant into a refrigerator takes three steps; putting the world into the brain through the human eye can likewise be broken down into three steps:
1. The eyeball senses the image (the sensor captures it and converts it into a digital signal)
2. The image is converted into neural signals and transmitted to the brain (the signal is sent to the processor over a communication link)
3. The brain processes and stores it (the processor converts it into a format that can be displayed on the screen and stored)
Human visual system
The image the human eye first sees arrives as light, but in the nerves it has already been transformed into electrical and chemical signals: the format changes as it propagates. Machines likewise need to convert between formats.
1. The format in the brain - the RGB image format
First of all, a screen is composed of individual pixels, and all of its brilliant colors come only from the three primary colors, red, green, and blue, at each pixel. This way of representing color is called the RGB color space (also the most commonly used color space in multimedia computing), as shown in the following figure:
Tricolor diagram
According to the principle of the three primary colors, any colored light F can be produced by additively mixing different amounts of R, G, and B, as shown in Formula 1.1:
F = r·R + g·G + b·B (Formula 1.1, principle of the three primary colors)
where r, g, and b are the amounts of the red, green, and blue components.
White light is a mixture of many kinds of light, so when the coefficients of all three primary colors are at their maximum the result is white, when they are all zero the result is black, and everything in between covers all the colors of the world (for example, with 8-bit channels, (255, 255, 255) is white and (0, 0, 0) is black).
Each pixel is like a paint box: the larger the box, the more colors it holds and the richer the colors it can express. In a computer, the size of this box is its storage space, and colors are adjusted by changing the amounts of the three primary colors. The further down the table below you go, the more storage space is required, but the more accurately each pixel can describe a color and the more realistic the on-screen image becomes.
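To make the 'paint box' idea concrete, here is a minimal sketch in C (the article itself gives no code): an RGB888 pixel with 8 bits per primary color, plus a comparison of the frame storage needed at two common bit depths. The 800 × 480 panel size is only an assumed example.

```c
#include <stdint.h>
#include <stdio.h>

/* One RGB888 pixel: 8 bits per primary color, 24 bits per "paint box". */
typedef struct {
    uint8_t r;  /* red   component, 0..255 */
    uint8_t g;  /* green component, 0..255 */
    uint8_t b;  /* blue  component, 0..255 */
} rgb888_t;

int main(void)
{
    /* Panel size is a hypothetical example, not a value from the article. */
    const unsigned long width  = 800;
    const unsigned long height = 480;

    /* More bits per pixel -> larger storage, but more accurate color. */
    unsigned long bytes_rgb565 = width * height * 2;  /* 16 bits per pixel */
    unsigned long bytes_rgb888 = width * height * 3;  /* 24 bits per pixel */

    printf("RGB565 frame: %lu bytes\n", bytes_rgb565);
    printf("RGB888 frame: %lu bytes\n", bytes_rgb888);
    return 0;
}
```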
2. The format on the eyeball - the YUV image format
For storage, in order to save space and simplify packaging, we encode the luminance signal Y and the two color-difference signals, B-Y (the blue difference, U) and R-Y (the red difference, V), separately. They are then transmitted and converted back to RGB at the display end. This way of representing color is called the YUV color space.
At this point you may ask, 'Hey, where is G (green)?' In fact, combining the brightness with the two color differences is enough to reconstruct the original colors algorithmically, so green does not need to be transmitted separately; it is recovered from Y, U, and V, as shown in the sketch below.
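As a minimal sketch (the exact coefficients depend on the standard; the commonly cited BT.601-style values are assumed here and are not taken from the article), converting between RGB and YUV might look like this in C. Note that green is never transmitted: it is recovered from Y, U, and V at the display end.

```c
#include <stdint.h>

/* RGB -> YUV: Y is the luminance; U and V are the blue and red
 * color differences scaled by conventional constants. */
static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       double *y, double *u, double *v)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;  /* luminance */
    *u = 0.492 * ((double)b - *y);           /* blue difference, B - Y */
    *v = 0.877 * ((double)r - *y);           /* red  difference, R - Y */
}

/* YUV -> RGB at the display end: R and B come straight from the color
 * differences, and G is solved from the luminance equation. */
static void yuv_to_rgb(double y, double u, double v,
                       double *r, double *g, double *b)
{
    *r = y + v / 0.877;
    *b = y + u / 0.492;
    *g = (y - 0.299 * (*r) - 0.114 * (*b)) / 0.587;
}
```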
Compared with RGB video signal transmission, the biggest advantage of YUV is that it requires much less bandwidth (RGB needs three independent video signals to be transmitted simultaneously). The difference in bandwidth usage between the two formats is shown in the following figure; RGB occupies much more bandwidth.
RGB format has higher bandwidth than YUV
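As a rough worked example (the frame size is assumed here, not given in the article): a 1920 × 1080 frame stored as RGB888 needs 1920 × 1080 × 3 ≈ 6.2 MB, while the same frame in YUV 4:2:2 needs 1920 × 1080 × 2 ≈ 4.1 MB, cutting the raw data per frame by about one third.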
So did we choose the YUV format without hesitation just to save some bandwidth? Of course not! Low bandwidth matters, but so does color.
The more important reason for using the YUV color space is that its luminance signal Y and chrominance signals U and V are separated. This separation not only avoids mutual interference, but also allows the chrominance sampling rate to be reduced without significantly affecting image quality. If U and V are both zero, there is no color at all and the picture becomes black-and-white television. Y is of course an important parameter too: the same hue can look very different in depth, and that depth depends on the brightness Y. The influence of Y is shown in the following figure.
Brightness variation chart
Below we introduce one YUV format, from which the others can be understood by analogy.
YUV 4:2:2:
The '4' means there are 4 Y samples in the stored stream;
The first '2' means there are 2 U color-difference samples in the stored stream;
The second '2' means there are 2 V color-difference samples in the stored stream.
The following four pixels are: [Y0 U0 V0] [Y1 U1 V1] [Y2 U2 V2] [Y3 U3 V3]
The stored stream is: Y0 U0 Y1 V1 Y2 U2 Y3 V3
The mapped pixel points are: [Y0 U0 V1] [Y1 U0 V1] [Y2 U2 V3] [Y3 U2 V3]
YUV sampling network
The diagram above shows a YUV 4:2:2 sampling grid, with luminance samples (Y) represented by crosses and chrominance samples (U, V) represented by circles. Every position has a cross, but only half of the positions have a circle, which is why the stored stream above contains all four Y values but only half as many U and V values.
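As a minimal sketch (assuming the packed YUYV byte order that matches the 'Y0 U0 Y1 V1 ...' stream above; this code is not from the article), unpacking a YUV 4:2:2 stream back into per-pixel values might look like this in C: every pixel keeps its own Y, while each pair of pixels shares one U and one V.

```c
#include <stdint.h>

typedef struct {
    uint8_t y;
    uint8_t u;
    uint8_t v;
} yuv_pixel_t;

/* Unpack a packed 4:2:2 stream (Y0 U0 Y1 V1 Y2 U2 Y3 V3 ...) into
 * per-pixel YUV values; pixel_count is assumed to be even. */
static void unpack_yuv422(const uint8_t *stream, yuv_pixel_t *pixels,
                          unsigned pixel_count)
{
    for (unsigned i = 0; i + 1 < pixel_count; i += 2) {
        const uint8_t *p = &stream[i * 2];  /* 4 bytes describe 2 pixels */

        pixels[i].y     = p[0];  /* Y0 */
        pixels[i].u     = p[1];  /* U0, shared with the next pixel */
        pixels[i].v     = p[3];  /* V1, shared with the next pixel */

        pixels[i + 1].y = p[2];  /* Y1 */
        pixels[i + 1].u = p[1];
        pixels[i + 1].v = p[3];
    }
}
```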
3. Development of interactive interfaces
Image formats have evolved mainly to improve the look of human-computer interaction interfaces. Today, such interfaces are typically built on emWin or Qt. Using Qt/Embedded usually requires running an embedded operating system on the microcontroller, so it places certain demands on MCU performance; in addition, if you have never used Qt/Embedded before, learning it takes time and effort. By contrast, emWin is better suited to quick, lightweight UI development, but its interaction effects and visual polish are more limited.
emWin_Demo
ZLG Zhiyuan Electronics has spent 12 years developing its next-generation embedded development platform, AWorks, which integrates the GUI programming framework AWUI. AWUI currently supports both Qt and emWin: the interface is edited with Designer and the ViewModel/Model is developed in C++. This allows developers to run applications on both Qt and emWin without learning the Qt or emWin APIs (provided the controls used are supported on emWin).
Building on AWUI, ZLG plans to launch the broader and more user-friendly AWTK within the year. AWTK includes a rich set of GUI components and pioneers a 'drag and drop' GUI programming mode that greatly improves GUI development efficiency. Paired with a well-designed architecture, it combines emWin's smooth operation in low memory with Qt's high-quality interface effects, ensuring that the interactive interface remains smooth and stable. In this way, embedded UI development is integrated into the AWorks platform as components, and interactive interfaces can be developed quickly on this platform.
AWUI Development Plan
UI Framework in AWorks
The ZLG Zhiyuan Electronics M1052 crossover core board supports the AWorks embedded development platform: it offers the strong processing performance of an MPU while retaining the simplicity, ease of use, and real-time advantages of MCU microcontrollers. It comes pre-installed with the AWorks real-time operating system and is designed for intelligent hardware and industrial IoT applications.