feat(benchmark): add trace output for running a specific scenario (#3245)

* feat(benchmark): add trace output for running a specific scenario

* Update lv_demo_benchmark.c
Gabriel Wang
2022-04-05 09:52:07 +01:00
committed by GitHub
parent bf85b50031
commit c6d4b6e554
2 changed files with 36 additions and 7 deletions


@@ -14,6 +14,7 @@ On the top of the screen the title of the current test step, and the result of th
- In `lv_conf.h` or equivalent places set `LV_USE_DEMO_BENCHMARK 1`
- After `lv_init()` and initializing the drivers call `lv_demo_benchmark()`
- If you want to run only a specific scene (e.g. for debugging or performance optimization), call `lv_demo_benchmark_run_scene()` instead of `lv_demo_benchmark()` and pass the scene number, as shown in the sketch after this list.
- If you enable trace output by setting the macro `LV_USE_LOG` to `1` and the trace level `LV_LOG_LEVEL` to `LV_LOG_LEVEL_USER` or higher, the benchmark results are printed in CSV format.
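
A minimal sketch of running a single scene (assuming an LVGL v8-style project; the include path, the driver-init placeholder, and the scene index `10` are illustrative):

```c
#include "lvgl.h"
#include "demos/lv_demos.h"    /* include path may differ per project */

void benchmark_single_scene(void)
{
    lv_init();

    /* ...initialize and register the display/input drivers here... */

    /* Run only one scene instead of the whole suite.
     * Scene 10 is an arbitrary example; pass the scene you want to profile. */
    lv_demo_benchmark_run_scene(10);
}
```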
## Interpret the result
@@ -33,6 +34,32 @@ By default, only the changed areas are refreshed. It means if only a few pixels
![LVGL benchmark running](https://github.com/lvgl/lvgl/tree/master/demos/benchmark/screenshot1.png?raw=true)
If you are doing performance analysis for 2D image-processing optimization, the LCD latency (flushing data to the LCD) introduced by `disp_flush()` might dilute the performance results of the LVGL drawing process, making it harder to see the effect of your optimization (gain or loss). To avoid this problem, please:
1. Temporarily remove the code that flushes data to the LCD inside `disp_flush()`. For example:
```c
static void disp_flush(lv_disp_drv_t * disp_drv, const lv_area_t * area, lv_color_t * color_p)
{
#if 0 //!< remove LCD latency
    GLCD_DrawBitmap(area->x1,                   //!< x
                    area->y1,                   //!< y
                    area->x2 - area->x1 + 1,    //!< width
                    area->y2 - area->y1 + 1,    //!< height
                    (const uint8_t *)color_p);
#endif

    /*IMPORTANT!!!
     *Inform the graphics library that you are ready with the flushing*/
    lv_disp_flush_ready(disp_drv);
}
```
2. Use trace output to get the benchmark results (a configuration sketch follows this list) by:
   - Setting the macro `LV_USE_LOG` to `1`
   - Setting the trace level `LV_LOG_LEVEL` to `LV_LOG_LEVEL_USER` or higher.
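
A minimal configuration sketch for the trace output (macro names as in LVGL v8; `LV_LOG_PRINTF` is optional and assumes your libc provides `printf`):

```c
/* In lv_conf.h (or equivalent) */
#define LV_USE_LOG      1                   /* Enable the log module */
#define LV_LOG_LEVEL    LV_LOG_LEVEL_USER   /* Print user-level messages and above */
#define LV_LOG_PRINTF   1                   /* Route the output through printf() */
```

With this in place, running the benchmark prints each scene's result, which you can capture and parse as CSV.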
## Result summary
At the end of the run, a table is created to display the measured FPS values.


@@ -875,15 +875,17 @@ static void report_cb(lv_timer_t * timer)
         if(opa_mode) {
             lv_label_set_text_fmt(subtitle, "Result of \"%s\": %"LV_PRId32" FPS", scenes[scene_act].name,
                                   scenes[scene_act].fps_normal);
+            LV_LOG("Result of \"%s\": %"LV_PRId32" FPS", scenes[scene_act].name,
+                   scenes[scene_act].fps_normal);
         }
-        else {
-            if(scene_act > 0) {
-                lv_label_set_text_fmt(subtitle, "Result of \"%s + opa\": %"LV_PRId32" FPS", scenes[scene_act - 1].name,
-                                      scenes[scene_act - 1].fps_opa);
-            }
-            else {
-                lv_label_set_text(subtitle, "");
-            }
+        else if(scene_act > 0) {
+            lv_label_set_text_fmt(subtitle, "Result of \"%s + opa\": %"LV_PRId32" FPS", scenes[scene_act - 1].name,
+                                  scenes[scene_act - 1].fps_opa);
+            LV_LOG("Result of \"%s + opa\": %"LV_PRId32" FPS", scenes[scene_act - 1].name,
+                   scenes[scene_act - 1].fps_opa);
+        }
+        else {
+            lv_label_set_text(subtitle, "");
         }
     }
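
If `LV_LOG_PRINTF` is disabled, the `LV_LOG` lines added above produce no output until a print callback is registered. A minimal sketch of routing the trace output to a serial console (v8-style API; `uart_send_string()` is a hypothetical platform function, replace it with your own transport):

```c
#include "lvgl.h"

/* Hypothetical platform function: transmit a zero-terminated string. */
extern void uart_send_string(const char * s);

/* Forward LVGL's trace output, including the benchmark's result lines,
 * to the serial console. */
static void benchmark_log_cb(const char * buf)
{
    uart_send_string(buf);
}

void benchmark_log_init(void)
{
    lv_log_register_print_cb(benchmark_log_cb);   /* requires LV_USE_LOG 1 */
}
```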