What is the principle of runtime power management technology?


In general, several design-time and runtime power management techniques can be deployed in real-time embedded systems. Although we discuss specific power management technologies that extend a standard multithreaded operating system (OS), it should be emphasized that using a preemptive multithreaded OS can itself yield significant power savings. Real-time applications that do not use an OS often have to poll interfaces periodically to detect events, which is quite inefficient from a power standpoint. Using an OS lets applications work in an interrupt-driven mode, in which code begins executing only when needed, in response to external events. In addition, when an OS-based application has nothing to do, it enters an idle thread, at which point a low-power operating mode can be engaged to reduce power consumption.
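As a minimal sketch of this idea, the C fragment below shows an RTOS idle hook entering a low-power state. The hook name os_idle_hook() and the cpu_idle() helper are hypothetical; real kernels expose an equivalent callback or let you replace the idle thread outright:

```c
/* Placeholder for the processor's clock-stopping instruction: "idle" on
 * many DSPs, "wfi" (wait for interrupt) on ARM cores. */
static inline void cpu_idle(void)
{
    __asm__ volatile("wfi");
}

/* Called by the scheduler whenever no thread is ready to run. */
void os_idle_hook(void)
{
    /* Stop the CPU clock. Any enabled interrupt (timer, DMA, codec)
     * restarts execution, so threads resume exactly when there is work
     * to do instead of burning power in a polling loop. */
    cpu_idle();
}
```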

However, beyond simply enabling an idle mode for the DSP core, the operating system needs to provide considerably more sophisticated power management support. In practice, a large share of power is consumed by peripheral devices, whether on-chip or external, and memory also draws substantial power, so it is crucial that any power management approach support managing peripheral power consumption. In addition, the square-law relationship between voltage and power consumption means that it is more efficient to execute code at a lower clock rate that permits a lower voltage than to execute at the highest clock rate and then go idle. We outline below the main opportunities for implementing power management support in the operating system:

System power-on behavior: The processor and its on-chip peripherals generally power up at the highest clock rate. Inevitably, some of the resources powered at startup are not yet needed, or will never be used by the application at all. For example, an MP3 player rarely uses its USB port to communicate with a PC. At startup, the operating system must provide a mechanism for the application to configure the system, turning off unneeded power-consuming devices or placing them in an idle state.
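For instance, an application-supplied startup hook might gate the clocks of devices the product never uses. The register address, bit assignments, and names below are hypothetical; the real ones would come from the chip's data sheet:

```c
#include <stdint.h>

/* Hypothetical memory-mapped peripheral clock-enable register. */
#define PERIPH_CLK_EN  (*(volatile uint32_t *)0x40001000u)
#define CLK_USB        (1u << 3)   /* USB port: rarely used on an MP3 player */
#define CLK_UART1      (1u << 5)   /* debug UART: unused in production */

/* Called once by the application after the OS finishes booting. */
void app_power_on_config(void)
{
    /* Gate the clocks of devices this product never (or rarely) uses,
     * so they stop drawing active power right after reset. */
    PERIPH_CLK_EN &= ~(CLK_USB | CLK_UART1);
}
```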


Idle mode: Active power in a CMOS circuit is consumed only when the circuit is being clocked, so gating off unnecessary clocks eliminates unnecessary active power. Most DSPs incorporate a mechanism for temporarily halting the CPU's active power consumption while waiting for an external event. This "idling" of the CPU clock is usually triggered by a "halt" or "idle" instruction, executed when the application or operating system determines there is nothing to do. Some DSPs are partitioned into multiple clock domains that can be idled independently, stopping active power consumption in unused modules. For example, TI's TMS320C5510 DSP has six clock domains that can be selectively idled: CPU, cache, DMA, peripheral clocks, clock generator, and external memory interface.
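The sketch below shows the general shape of such an interface, loosely modeled on the C5510's idle-domain scheme; the register address, bit layout, and inline assembly are illustrative, not taken from the data sheet:

```c
#include <stdint.h>

/* Hypothetical idle configuration register and domain bits, loosely
 * following the C5510's six idle domains. */
#define IDLE_CFG    (*(volatile uint16_t *)0x0001u)
#define IDLE_CPU    (1u << 0)
#define IDLE_CACHE  (1u << 1)
#define IDLE_DMA    (1u << 2)
#define IDLE_PERIPH (1u << 3)
#define IDLE_CLKGEN (1u << 4)
#define IDLE_EMIF   (1u << 5)

static inline void enter_idle(uint16_t domains)
{
    IDLE_CFG = domains;       /* select the clock domains to stop       */
    asm(" idle");             /* execute the idle instruction; domains  */
                              /* restart when an enabled wakeup event   */
                              /* or interrupt occurs                    */
}

/* Example: while a DMA-driven audio transfer completes, stop the CPU
 * and cache domains but keep DMA and peripheral clocks running. */
void wait_for_audio_buffer(void)
{
    enter_idle(IDLE_CPU | IDLE_CACHE);
}
```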

In addition to supporting idling of the DSP and its on-chip peripherals, the operating system must also provide mechanisms for idling external peripheral devices. For example, some codecs have built-in low-power modes that can be activated. One challenge is posed by peripherals such as watchdog timers. Normally a watchdog must be serviced within a predefined interval to prevent it from triggering, so power management techniques that slow or halt processing may inadvertently cause application failures. The OS should therefore allow the application to disable such peripherals before entering a sleep mode.
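One way the OS might expose this is sketched below; wdt_suspend(), wdt_resume(), and cpu_sleep() are hypothetical board-support routines, stubbed here so the sketch stands alone:

```c
#include <stdbool.h>

/* Hypothetical board-support stubs; real drivers would program hardware. */
static void wdt_suspend(void) { /* pause or disable the watchdog      */ }
static void wdt_resume(void)  { /* re-enable the watchdog and kick it */ }
static void cpu_sleep(void)   { /* enter the chosen low-power mode    */ }
static void schedule_wakeup_before_watchdog_expiry(void) { }

void os_enter_sleep(bool watchdog_can_stop)
{
    if (watchdog_can_stop) {
        wdt_suspend();    /* prevent a spurious reset while asleep      */
        cpu_sleep();
        wdt_resume();     /* restart the watchdog before threads resume */
    } else {
        /* Watchdog keeps running: program a timer wakeup shorter than
         * the watchdog period so it can still be serviced in time. */
        schedule_wakeup_before_watchdog_expiry();
        cpu_sleep();
    }
}
```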

Power down: Although idle modes eliminate active power consumption, static power is dissipated even when the circuit is not switching, mainly due to reverse-bias leakage. If a module in the system does not need to be powered at all times, power can be reduced by having the operating system power the subsystem up only when it is needed. To date, embedded system developers have done little to minimize static power consumption, because the static power of CMOS circuits has been very low. However, newer, higher-performance transistors leak significantly more current, forcing renewed attention to reducing static power consumption and to more sophisticated sleep modes.
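As a sketch of what on-demand powering might look like in the OS, the routines below reference-count a power domain so the rail is switched off only when its last user releases it. board_rail_on() and board_rail_off() are hypothetical board-support calls, and locking is omitted for brevity:

```c
/* Hypothetical stubs that would switch an external or on-chip rail. */
static void board_rail_on(int rail)  { (void)rail; }
static void board_rail_off(int rail) { (void)rail; }

#define NUM_RAILS 4
static int rail_users[NUM_RAILS];   /* reference count per power domain */

void power_domain_get(int rail)
{
    if (rail_users[rail]++ == 0)
        board_rail_on(rail);        /* first user: power the subsystem up */
}

void power_domain_put(int rail)
{
    if (--rail_users[rail] == 0)
        board_rail_off(rail);       /* last user gone: cut the rail, so  */
                                    /* the module leaks no static power  */
}
```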

Voltage and frequency scaling: Active power consumption is linearly proportional to the switching frequency but proportional to the square of the supply voltage (P ∝ f · V²). Running an application at a lower frequency by itself saves little power compared with running at the full clock rate and then going idle. However, if the lower frequency is compatible with a lower operating voltage available on the platform, then significant savings are possible by reducing the voltage, precisely because of the square-law relationship. This has also prompted a great deal of academic research into saving power through voltage scaling.
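The arithmetic is worth making concrete. The short program below, using made-up but plausible operating points, shows that for a fixed workload, halving the frequency alone leaves the energy roughly unchanged (half the power for twice as long), while also dropping the voltage cuts the energy by the square-law factor:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical operating points: frequency in MHz, core voltage in V. */
    double f_full = 200.0, v_full = 1.6;
    double f_half = 100.0, v_half = 1.1;   /* lower V allowed at 100 MHz */

    /* Relative active power: P ~ f * V^2. */
    double p_full      = f_full * v_full * v_full;
    double p_freq_only = f_half * v_full * v_full;
    double p_scaled    = f_half * v_half * v_half;

    /* Energy for a fixed workload ~ power * time, and time ~ 1/f. */
    printf("energy, full speed:      %.0f\n", p_full      / f_full * 100);
    printf("energy, half f, same V:  %.0f\n", p_freq_only / f_half * 100);
    printf("energy, half f, lower V: %.0f\n", p_scaled    / f_half * 100);
    return 0;   /* prints 256, 256, 121: only the voltage drop saves energy */
}
```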

Although voltage scaling is a potentially very attractive way to reduce power consumption, it should be used carefully in real-world applications, because we must fully understand whether the system can still meet its real-time deadlines. Lowering the voltage (and therefore the CPU frequency) changes the execution time of a given task, which may cause the task to miss its real-time deadline. Even if the new frequency is compatible with the deadline, problems remain if the latency of switching the frequency and voltage is too long (see the sketch after this list). Factors affecting this latency include the following:

* The time required to reprogram the voltage regulator

* Whether the DSP can continue to execute other code while the voltage is changing

* Peripherals, such as serial ports or external memory interfaces, that must be reprogrammed to work with a clock now running at a different rate; for example, a reduction in the CPU clock rate may require a reduction in the number of wait states for external memory accesses

* The possibility that reprogramming the timer used to generate the operating system's clock tick will affect the accuracy of the OS time base
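Here is the sketch referred to above: a guarded setpoint change in which the OS accepts a new frequency/voltage pair only if the task's worst-case execution time at the lower frequency, plus the transition latency itself, still fits within the deadline. The setpoint structure and every function named here are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t freq_hz;
    uint32_t transition_us;   /* regulator + PLL + peripheral reprogramming */
} setpoint_t;

bool os_request_setpoint(const setpoint_t *sp,
                         uint32_t task_cycles,     /* worst-case cycle count */
                         uint32_t deadline_us)
{
    uint64_t exec_us = (uint64_t)task_cycles * 1000000u / sp->freq_hz;

    if (exec_us + sp->transition_us > deadline_us)
        return false;                /* would miss the deadline: refuse */

    /* Hypothetical steps a real implementation would perform here:
     *   regulator_set_voltage(sp);  reprogram the regulator
     *   pll_set_frequency(sp);      change the clock
     *   periph_rescale(sp);         fix wait states, baud rates, ...
     *   os_tick_rescale(sp);        keep the OS time base accurate   */
    return true;
}
```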

Although the actual latency of voltage scaling varies with the DSP chosen and the number of peripherals that must be reprogrammed, in many systems it can reach hundreds of microseconds or even several milliseconds, which makes voltage scaling impractical for many real-time applications. Despite these drawbacks, voltage scaling can still be used in applications that need full processing power only in certain predictable modes. For example, a portable music player may use the DSP both for MP3 decoding and for the general control processing required by the user interface. If only MP3 decoding requires the full clock rate, the DSP can reduce its voltage while performing user-interface functions and run at full power only when music data starts streaming to the DSP.
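A sketch of that mode-driven policy follows, with hypothetical board-support routines for the core voltage and clock; the ordering it encodes is the standard DVFS rule of raising the voltage before the frequency when speeding up, and lowering the frequency before the voltage when slowing down:

```c
#include <stdint.h>

typedef enum { MODE_UI_ONLY, MODE_MP3_DECODE } app_mode_t;

/* Hypothetical board-support stubs. */
static void set_core_voltage_mv(uint32_t mv) { (void)mv; }
static void set_core_clock_hz(uint32_t hz)   { (void)hz; }

void on_mode_change(app_mode_t mode)
{
    if (mode == MODE_MP3_DECODE) {
        /* Speeding up: raise the voltage first, because the core must
         * already be at the higher voltage to run the faster clock. */
        set_core_voltage_mv(1600u);        /* illustrative values */
        set_core_clock_hz(200000000u);
    } else {
        /* Slowing down: lower the frequency first, then the voltage. */
        set_core_clock_hz(72000000u);
        set_core_voltage_mv(1100u);
    }
}
```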


Author: Yoyokuo