feat(gpu): reattach NXP PXP/VGLite accelerators (#3322)

* feat(pxp/vglite) Attach NXP GPUs to the new draw context.

Create a single NXP draw layer on top of PXP and VGLITE.
Extra changes:
1. Add VGLITE image blit acceleration.
2. Re-enable the blit-split workaround for the quality issue on RT500.
3. Increase the fill/blit size threshold from 32 px to 5000 px for both PXP and VGLITE.
4. Allow enabling both PXP and VGLITE and add a fallback mechanism (see the sketch below):
- By default, PXP tries to accelerate first. If the operation is not supported (or fails
  due to the threshold condition from 3.), it falls back to VGLITE.
- If VGLITE does not support the operation either (or fails due to the threshold
  condition from 3.), it falls back to the CPU.
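
A minimal sketch of the fallback chain for the fill path, condensed from lv_draw_nxp_blend() in lv_gpu_nxp.c further below (the wrapper name here is illustrative only):

    static void nxp_fill_with_fallback(lv_draw_ctx_t * draw_ctx, const lv_draw_sw_blend_dsc_t * dsc)
    {
        lv_area_t blend_area;
        if(!_lv_area_intersect(&blend_area, dsc->blend_area, draw_ctx->clip_area))
            return; /*Fully clipped, nothing to do*/
        lv_area_move(&blend_area, -draw_ctx->buf_area->x1, -draw_ctx->buf_area->y1);

        lv_color_t * dest_buf = draw_ctx->buf;
        lv_coord_t dest_w = lv_area_get_width(draw_ctx->buf_area); /*stride equals the buffer width here*/
        lv_coord_t dest_h = lv_area_get_height(draw_ctx->buf_area);
        bool done = false;

    #if LV_USE_GPU_NXP_PXP
        /*PXP first: it returns LV_RES_INV below the size threshold or for unsupported modes*/
        done = (lv_gpu_nxp_pxp_fill(dest_buf, dest_w, &blend_area, dsc->color, dsc->opa) == LV_RES_OK);
    #endif
    #if LV_USE_GPU_NXP_VG_LITE
        if(!done) /*then VG-Lite*/
            done = (lv_gpu_nxp_vglite_fill(dest_buf, dest_w, dest_h, &blend_area, dsc->color, dsc->opa) == LV_RES_OK);
    #endif
        if(!done) /*finally the CPU renderer*/
            lv_draw_sw_blend_basic(draw_ctx, dsc);
    }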

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(vglite) Add VGLITE support to draw the background of rectangles.

optim: draw only a circle when the radius has the value LV_RADIUS_CIRCLE.
optim: use cubic Bezier curves to draw rounded corners.

Signed-off-by: Seb Fagard <sebastien.fagard@nxp.com>
Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(vglite) Add VGLITE support to draw arcs.

Use up to 4 Bezier curves to optimize the drawing of an arc.
The arc curvature has to stay constant while the angle grows:
for this we compute each sub-arc from a best approximation of a quarter arc
(the standard quarter-arc constant is recalled below) and use dichotomy to find
the sub-arc 't' parameter instead of a tangent approximation.
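
For reference, the textbook cubic Bezier approximation of a circular arc of radius r and sweep angle theta places the inner control points at distance k*r along the end-point tangents (the driver refines the sub-arc split with the dichotomy mentioned above rather than relying on this tangent constant alone):

    k = \frac{4}{3}\tan\frac{\theta}{4}, \qquad
    P_1 = P_0 + k\,r\,T_0, \qquad
    P_2 = P_3 - k\,r\,T_3

    \text{quarter arc } (\theta = \pi/2): \quad k = \frac{4}{3}\bigl(\sqrt{2}-1\bigr) \approx 0.5523

where P_0, P_3 are the arc end points and T_0, T_3 the unit tangents at those points.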

Signed-off-by: Seb Fagard <sebastien.fagard@nxp.com>
Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(pxp) Add ARGB8888 format support for the PXP backend.

Supports per-pixel and global alpha blending, and the combination of alpha blending with the recolor feature (see the format selection below).
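
The PXP surface formats selected at 32-bit color depth, as defined in lv_draw_pxp_blend.c below:

    #define PXP_OUT_PIXEL_FORMAT kPXP_OutputPixelFormatARGB8888 /*output buffer*/
    #define PXP_AS_PIXEL_FORMAT kPXP_AsPixelFormatARGB8888 /*AS: source image, carries per-pixel alpha*/
    #define PXP_PS_PIXEL_FORMAT kPXP_PsPixelFormatRGB888 /*PS: background pixels, fetched without alpha*/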

Signed-off-by: Jerome Evillard <jerome.evillard@nxp.com>
Signed-off-by: Seb Fagard <sebastien.fagard@nxp.com>
Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(vglite) Add support for the 32-bit ARGB color format.

Signed-off-by: Seb Fagard <sebastien.fagard@nxp.com>
Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(vglite) Add VGLITE acceleration support for rotation and zoom.

Signed-off-by: Seb Fagard <sebastien.fagard@nxp.com>
Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(pxp) Add PXP acceleration for rotation in blit.

Applies the rotation on the PXP output, as shown below.
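
Condensed from lv_gpu_nxp_pxp_blit() below: the LVGL display rotation (LV_DISP_ROT_90/180/270) is mapped to a PXP rotation and applied by the output engine rather than on the fetched source.

    /*pxp_rot is kPXP_Rotate90/180/270 for the matching LV_DISP_ROT_* value, otherwise kPXP_Rotate0*/
    PXP_SetRotateConfig(LV_GPU_NXP_PXP_ID, kPXP_RotateOutputBuffer, pxp_rot, kPXP_FlipDisable);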

Signed-off-by: Seb Fagard <sebastien.fagard@nxp.com>
Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(pxp/vglite) Avoid calling the blend callback from the image-decoded callback.

Add lv_gpu_nxp_pxp_blit_transform() and lv_gpu_nxp_vglite_blit_transform().
This greatly simplifies the fallback mechanism and the way new
decoded-image features are added.
(MGG-884)

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(pxp) Add image rotation in 90-degree steps.

Simplify the two-step process by adding the blit_cover and blit_opa functions.

To rotate or recolor with opacity, two steps must be followed (see the sketch below):
1. Run the operation without opa.
2. Blend the result by applying the opa.
(MGG-469)

Note:
Recolor and rotate are currently not supported together with opa or an alpha channel. (MGG-883)
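
A condensed sketch of the two-step path, following lv_gpu_nxp_pxp_blit_opa() added below in lv_draw_pxp_blend.c (the LV_MEM_SIZE check is omitted here):

    /*Step 1: rotate/recolor with full opacity into a temporary buffer.
      Step 2: blend that buffer onto the destination with the requested opa/alpha.*/
    lv_color_t * tmp_buf = (lv_color_t *)lv_mem_buf_get(dest_w * dest_h * sizeof(lv_color_t));
    if(!tmp_buf)
        return LV_RES_INV;
    const lv_area_t tmp_area = {.x1 = 0, .y1 = 0, .x2 = dest_w - 1, .y2 = dest_h - 1};
    lv_res_t res = lv_gpu_nxp_pxp_blit_cover(tmp_buf, &tmp_area, dest_w, src_buf, src_area, dsc, cf);
    if(res == LV_RES_OK)
        res = lv_gpu_nxp_pxp_blit_cf(dest_buf, dest_area, dest_stride, tmp_buf, &tmp_area, dsc, cf);
    lv_mem_buf_release(tmp_buf);
    return res;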

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(vglite) Fix the incorrect slider widget indicator when VGLITE acceleration is enabled.

(MGG-863)

Signed-off-by: Stefan Babatie <stefan.babatie@nxp.com>
Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(pxp) Fix PXP fill (simple rect draw) on 32-bit color depth.

(MGG-890)

Signed-off-by: Stefan Babatie <stefan.babatie@nxp.com>

* fix(pxp) Separate the simple blit from the blit with transformation.

When a blit with transformation is requested, we have to
decide whether the operation needs to be done in one or two steps
(condensed in the sketch below):
Blit with color format (opa or alpha or chroma key) - requires one step: blit_cf().
Blit with rotation or recolor but no color format - requires one step: blit_cover().
Blit with rotation or recolor + opa or alpha - requires two steps:
blit_opa() = blit_cover() + blit_cf().
Blit with rotation or recolor + chroma key - not supported yet.
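
The resulting decision logic, condensed from lv_gpu_nxp_pxp_blit_transform() below:

    bool recolor = (dsc->recolor_opa != LV_OPA_TRANSP);
    bool rotation = (dsc->angle != 0);
    if(recolor || rotation) {
        if(dsc->opa >= (lv_opa_t)LV_OPA_MAX && !lv_img_cf_has_alpha(cf) && !lv_img_cf_is_chroma_keyed(cf))
            /*One step: transformation only*/
            return lv_gpu_nxp_pxp_blit_cover(dest_buf, dest_area, dest_stride, src_buf, src_area, dsc, cf);
        else
            /*Two steps: blit_cover() into a temporary buffer, then blit_cf() with opa/alpha*/
            return lv_gpu_nxp_pxp_blit_opa(dest_buf, dest_area, dest_stride, src_buf, src_area, dsc, cf);
    }
    /*One step: opa/alpha/chroma key, no transformation*/
    return lv_gpu_nxp_pxp_blit_cf(dest_buf, dest_area, dest_stride, src_buf, src_area, dsc, cf);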

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(pxp) Add support for recolor with chroma-keying.

(MGG-434)

Signed-off-by: Stefan Babatie <stefan.babatie@nxp.com>

* fix(pxp) Fix temporary buffer allocation limit.

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(pxp) Add a PXP limitation check when rotating images not aligned to 16x16 blocks (see the guard below).
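
PXP processes 16x16 blocks and its output engine truncates pixels past the requested size; when a rotated source is not block-aligned, the wrong pixels would be truncated and the output would look shifted. The guard added to lv_gpu_nxp_pxp_blit_cover() below therefore refuses such blits:

    if(lv_area_get_width(src_area) % 16 || lv_area_get_height(src_area) % 16) {
        PXP_LOG_TRACE("Rotation is not supported for image w/o alignment to block size 16x16.");
        return LV_RES_INV; /*the caller then falls back to VG-Lite or the CPU renderer*/
    }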

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(nxp) Add makefiles.

Tested with:
      working-directory: tests/makefile
      run: make test_file

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* feat(nxp) Update the NXP GitHub documentation.

(MGG-864)

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(nxp) Keep the API comments of global functions only in the H files to make maintenance simpler.

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(nxp/vglite) Fix some warnings. Remove unused variables.

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(nxp) lv_draw_nxp_ctx_deinit() shall simply fall back to the software call.

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(nxp) Fall back to software callbacks if ARGB8565 support is needed.

During rendering, LVGL might initialize new draw_ctxs and start drawing into
a separate buffer (called a layer). If the content to be rendered has "holes",
e.g. a rounded corner, LVGL temporarily sets the disp_drv.screen_transp flag.
This means the renderers should draw into an ARGB buffer.
With 32-bit color depth this is not a big problem, but with 16-bit color depth
the target pixel format is ARGB8565, which is not supported by the GPU.
In this case, the NXP callbacks should fall back to SW rendering (see the sketch below).
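
One possible shape of the guard (in the callbacks added below the check is folded into the acceleration condition via the need_argb8565_support() helper rather than an early return):

    if(need_argb8565_support()) {
        /*Target layer is ARGB8565 (16-bit depth with screen_transp set): not supported by PXP/VG-Lite*/
        lv_draw_sw_rect(draw_ctx, dsc, coords); /*plain software rendering*/
        return;
    }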

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(pxp) Fix wrong demo widget transparency.

1. Fix the inverted alpha calculation for fill with opacity.
2. Also fix the recoloring ratio for the chroma key.

Signed-off-by: Stefan Babatie <stefan.babatie@nxp.com>

* fix(vglite) Remove software pre-multiplication when hardware pre-multiplication is available.

(MGG-886)

Signed-off-by: Stefan Babatie <stefan.babatie@nxp.com>

* doc(vglite) Add vglite initialization info.

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(pxp/vglite) Fix unused variable warnings when PXP/VGLite are not enabled simultaneously.

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

* fix(pxp) Fix the color-key recoloring on 16-bit color depth.

1. Fix the arguments of lv_color_mix(): the previous call used inverted logic, although the result happened to work anyway.
2. Use LV_COLOR_SET_X to adjust the channels on both 16- and 32-bit depths, and fix the channel maximum values (see the excerpt below).
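
Condensed from the chroma-key path of lv_gpu_nxp_pxp_blit_cf() below: the recolor is mixed onto the chroma key in the corrected argument order, and the upper color-key bound is clamped to the real channel maxima of the active color depth.

    /*New color key after recoloring*/
    lv_color_t colorKey = lv_color_mix(dsc->recolor, LV_COLOR_CHROMA_KEY, dsc->recolor_opa);
    #if LV_COLOR_DEPTH==16
    LV_COLOR_SET_R(colorKeyHigh, colorKey.ch.red != 0x1f ? colorKey.ch.red + 1 : 0x1f); /*5-bit red max*/
    LV_COLOR_SET_G(colorKeyHigh, colorKey.ch.green != 0x3f ? colorKey.ch.green + 1 : 0x3f); /*6-bit green max*/
    LV_COLOR_SET_B(colorKeyHigh, colorKey.ch.blue != 0x1f ? colorKey.ch.blue + 1 : 0x1f); /*5-bit blue max*/
    #else /*LV_COLOR_DEPTH==32*/
    LV_COLOR_SET_R(colorKeyHigh, colorKey.ch.red != 0xff ? colorKey.ch.red + 1 : 0xff);
    LV_COLOR_SET_G(colorKeyHigh, colorKey.ch.green != 0xff ? colorKey.ch.green + 1 : 0xff);
    LV_COLOR_SET_B(colorKeyHigh, colorKey.ch.blue != 0xff ? colorKey.ch.blue + 1 : 0xff);
    #endif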

Signed-off-by: Nicușor Cîțu <nicusor.citu@nxp.com>

Co-authored-by: Stefan Babatie <stefan.babatie@nxp.com>
Author: nicusorcitu
Committed: 2022-05-24 12:43:57 +03:00 (via GitHub)
Commit: 029eef79c4 (parent: 52287fd64a)
28 changed files with 3857 additions and 1541 deletions


@@ -0,0 +1,9 @@
CSRCS += lv_gpu_nxp.c
DEPPATH += --dep-path $(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp
VPATH += :$(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp
CFLAGS += "-I$(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp"
include $(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/pxp/lv_draw_nxp_pxp.mk
include $(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/vglite/lv_draw_nxp_vglite.mk

src/draw/nxp/lv_gpu_nxp.c

@@ -0,0 +1,414 @@
/**
* @file lv_gpu_nxp.c
*
*/
/**
* MIT License
*
* Copyright 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_gpu_nxp.h"
#if LV_USE_GPU_NXP_PXP || LV_USE_GPU_NXP_VG_LITE
/*
* Allow using both PXP and VGLITE:
* both 2D accelerators can be used at the same time,
* so VGLITE can accelerate widget drawing
* while PXP accelerates blit & fill operations.
*/
#if LV_USE_GPU_NXP_PXP
#include "pxp/lv_draw_pxp_blend.h"
#endif
#if LV_USE_GPU_NXP_VG_LITE
#include "vglite/lv_draw_vglite_blend.h"
#include "vglite/lv_draw_vglite_rect.h"
#include "vglite/lv_draw_vglite_arc.h"
#endif
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/**********************
* STATIC PROTOTYPES
**********************/
static void lv_draw_nxp_img_decoded(lv_draw_ctx_t * draw_ctx, const lv_draw_img_dsc_t * dsc,
const lv_area_t * coords, const uint8_t * map_p, lv_img_cf_t cf);
static void lv_draw_nxp_blend(lv_draw_ctx_t * draw_ctx, const lv_draw_sw_blend_dsc_t * dsc);
static void lv_draw_nxp_rect(lv_draw_ctx_t * draw_ctx, const lv_draw_rect_dsc_t * dsc, const lv_area_t * coords);
static lv_res_t draw_nxp_bg(lv_draw_ctx_t * draw_ctx, const lv_draw_rect_dsc_t * dsc, const lv_area_t * coords);
static void lv_draw_nxp_arc(lv_draw_ctx_t * draw_ctx, const lv_draw_arc_dsc_t * dsc, const lv_point_t * center,
uint16_t radius, uint16_t start_angle, uint16_t end_angle);
/**********************
* STATIC VARIABLES
**********************/
/**********************
* MACROS
**********************/
/**********************
* GLOBAL FUNCTIONS
**********************/
void lv_draw_nxp_ctx_init(lv_disp_drv_t * drv, lv_draw_ctx_t * draw_ctx)
{
lv_draw_sw_init_ctx(drv, draw_ctx);
lv_draw_nxp_ctx_t * nxp_draw_ctx = (lv_draw_sw_ctx_t *)draw_ctx;
nxp_draw_ctx->base_draw.draw_arc = lv_draw_nxp_arc;
nxp_draw_ctx->base_draw.draw_rect = lv_draw_nxp_rect;
nxp_draw_ctx->base_draw.draw_img_decoded = lv_draw_nxp_img_decoded;
nxp_draw_ctx->blend = lv_draw_nxp_blend;
//nxp_draw_ctx->base_draw.wait_for_finish = lv_draw_nxp_wait_cb;
}
void lv_draw_nxp_ctx_deinit(lv_disp_drv_t * drv, lv_draw_ctx_t * draw_ctx)
{
lv_draw_sw_deinit_ctx(drv, draw_ctx);
}
/**********************
* STATIC FUNCTIONS
**********************/
/**
* During rendering, LVGL might initialize new draw_ctxs and start drawing into
* a separate buffer (called layer). If the content to be rendered has "holes",
* e.g. rounded corner, LVGL temporarily sets the disp_drv.screen_transp flag.
* It means the renderers should draw into an ARGB buffer.
* With 32 bit color depth it's not a big problem but with 16 bit color depth
* the target pixel format is ARGB8565 which is not supported by the GPU.
* In this case, the NXP callbacks should fallback to SW rendering.
*/
static inline bool need_argb8565_support()
{
#if LV_COLOR_DEPTH != 32
lv_disp_t * disp = _lv_refr_get_disp_refreshing();
if(disp->driver->screen_transp == 1)
return true;
#endif
return false;
}
static void lv_draw_nxp_blend(lv_draw_ctx_t * draw_ctx, const lv_draw_sw_blend_dsc_t * dsc)
{
lv_area_t blend_area;
/*Let's get the blend area which is the intersection of the area to fill and the clip area.*/
if(!_lv_area_intersect(&blend_area, dsc->blend_area, draw_ctx->clip_area))
return; /*Fully clipped, nothing to do*/
/*Make the blend area relative to the buffer*/
lv_area_move(&blend_area, -draw_ctx->buf_area->x1, -draw_ctx->buf_area->y1);
bool done = false;
/*Fill/Blend only non masked, normal blended*/
if(dsc->mask_buf == NULL && dsc->blend_mode == LV_BLEND_MODE_NORMAL && !need_argb8565_support()) {
lv_color_t * dest_buf = draw_ctx->buf;
lv_coord_t dest_stride = lv_area_get_width(draw_ctx->buf_area);
#if LV_USE_GPU_NXP_VG_LITE
lv_coord_t dest_width = lv_area_get_width(draw_ctx->buf_area);
lv_coord_t dest_height = lv_area_get_height(draw_ctx->buf_area);
#endif
const lv_color_t * src_buf = dsc->src_buf;
if(src_buf == NULL) {
#if LV_USE_GPU_NXP_PXP
done = (lv_gpu_nxp_pxp_fill(dest_buf, dest_stride, &blend_area,
dsc->color, dsc->opa) == LV_RES_OK);
if(!done)
PXP_LOG_TRACE("PXP fill failed. Fallback.");
#endif
#if LV_USE_GPU_NXP_VG_LITE
if(!done) {
done = (lv_gpu_nxp_vglite_fill(dest_buf, dest_width, dest_height, &blend_area,
dsc->color, dsc->opa) == LV_RES_OK);
if(!done)
VG_LITE_LOG_TRACE("VG-Lite fill failed. Fallback.");
}
#endif
}
else {
#if LV_USE_GPU_NXP_PXP
done = (lv_gpu_nxp_pxp_blit(dest_buf, &blend_area, dest_stride, src_buf, dsc->blend_area,
dsc->opa, LV_DISP_ROT_NONE) == LV_RES_OK);
if(!done)
PXP_LOG_TRACE("PXP blit failed. Fallback.");
#endif
#if LV_USE_GPU_NXP_VG_LITE
if(!done) {
lv_gpu_nxp_vglite_blit_info_t blit;
lv_coord_t src_stride = lv_area_get_width(dsc->blend_area);
blit.src = src_buf;
blit.src_width = lv_area_get_width(dsc->blend_area);
blit.src_height = lv_area_get_height(dsc->blend_area);
blit.src_stride = src_stride * (int32_t)sizeof(lv_color_t);
blit.src_area.x1 = (blend_area.x1 - (dsc->blend_area->x1 - draw_ctx->buf_area->x1));
blit.src_area.y1 = (blend_area.y1 - (dsc->blend_area->y1 - draw_ctx->buf_area->y1));
blit.src_area.x2 = blit.src_area.x1 + blit.src_width - 1;
blit.src_area.y2 = blit.src_area.y1 + blit.src_height - 1;
blit.dst = dest_buf;
blit.dst_width = dest_width;
blit.dst_height = dest_height;
blit.dst_stride = dest_stride * (int32_t)sizeof(lv_color_t);
blit.dst_area.x1 = blend_area.x1;
blit.dst_area.y1 = blend_area.y1;
blit.dst_area.x2 = blend_area.x2;
blit.dst_area.y2 = blend_area.y2;
blit.opa = dsc->opa;
blit.zoom = LV_IMG_ZOOM_NONE;
blit.angle = 0;
done = (lv_gpu_nxp_vglite_blit(&blit) == LV_RES_OK);
if(!done)
VG_LITE_LOG_TRACE("VG-Lite blit failed. Fallback.");
}
#endif
}
}
if(!done)
lv_draw_sw_blend_basic(draw_ctx, dsc);
}
static void lv_draw_nxp_img_decoded(lv_draw_ctx_t * draw_ctx, const lv_draw_img_dsc_t * dsc,
const lv_area_t * coords, const uint8_t * map_p, lv_img_cf_t cf)
{
/*Use the clip area as draw area*/
lv_area_t draw_area;
lv_area_copy(&draw_area, draw_ctx->clip_area);
bool mask_any = lv_draw_mask_is_any(&draw_area);
#if LV_USE_GPU_NXP_VG_LITE
bool recolor = (dsc->recolor_opa != LV_OPA_TRANSP);
#endif
#if LV_USE_GPU_NXP_PXP
bool scale = (dsc->zoom != LV_IMG_ZOOM_NONE);
#endif
bool done = false;
lv_area_t blend_area;
/*Let's get the blend area which is the intersection of the area to fill and the clip area.*/
if(!_lv_area_intersect(&blend_area, coords, draw_ctx->clip_area))
return; /*Fully clipped, nothing to do*/
/*Make the blend area relative to the buffer*/
lv_area_move(&blend_area, -draw_ctx->buf_area->x1, -draw_ctx->buf_area->y1);
const lv_color_t * src_buf = (const lv_color_t *)map_p;
if(!src_buf) {
lv_draw_sw_img_decoded(draw_ctx, dsc, coords, map_p, cf);
return;
}
lv_color_t * dest_buf = draw_ctx->buf;
lv_coord_t dest_stride = lv_area_get_width(draw_ctx->buf_area);
#if LV_USE_GPU_NXP_PXP
if(!mask_any && !scale && !need_argb8565_support()
#if LV_COLOR_DEPTH!=32
&& !lv_img_cf_has_alpha(cf)
#endif
) {
done = (lv_gpu_nxp_pxp_blit_transform(dest_buf, &blend_area, dest_stride, src_buf, coords,
dsc, cf) == LV_RES_OK);
if(!done)
PXP_LOG_TRACE("PXP blit transform failed. Fallback.");
}
#endif
#if LV_USE_GPU_NXP_VG_LITE
if(!done && !mask_any && !need_argb8565_support() &&
!lv_img_cf_is_chroma_keyed(cf) && !recolor
#if LV_COLOR_DEPTH!=32
&& !lv_img_cf_has_alpha(cf)
#endif
) {
lv_gpu_nxp_vglite_blit_info_t blit;
lv_coord_t src_stride = lv_area_get_width(coords);
blit.src = src_buf;
blit.src_width = lv_area_get_width(coords);
blit.src_height = lv_area_get_height(coords);
blit.src_stride = src_stride * (int32_t)sizeof(lv_color_t);
blit.src_area.x1 = (blend_area.x1 - (coords->x1 - draw_ctx->buf_area->x1));
blit.src_area.y1 = (blend_area.y1 - (coords->y1 - draw_ctx->buf_area->y1));
blit.src_area.x2 = blit.src_area.x1 + blit.src_width - 1;
blit.src_area.y2 = blit.src_area.y1 + blit.src_height - 1;
blit.dst = dest_buf;
blit.dst_width = lv_area_get_width(draw_ctx->buf_area);
blit.dst_height = lv_area_get_height(draw_ctx->buf_area);
blit.dst_stride = dest_stride * (int32_t)sizeof(lv_color_t);
blit.dst_area.x1 = blend_area.x1;
blit.dst_area.y1 = blend_area.y1;
blit.dst_area.x2 = blend_area.x2;
blit.dst_area.y2 = blend_area.y2;
blit.opa = dsc->opa;
blit.angle = dsc->angle;
blit.pivot = dsc->pivot;
blit.zoom = dsc->zoom;
done = (lv_gpu_nxp_vglite_blit_transform(&blit) == LV_RES_OK);
if(!done)
VG_LITE_LOG_TRACE("VG-Lite blit transform failed. Fallback.");
}
#endif
if(!done)
lv_draw_sw_img_decoded(draw_ctx, dsc, coords, map_p, cf);
}
static void lv_draw_nxp_rect(lv_draw_ctx_t * draw_ctx, const lv_draw_rect_dsc_t * dsc, const lv_area_t * coords)
{
bool done = false;
lv_draw_rect_dsc_t nxp_dsc;
lv_memcpy(&nxp_dsc, dsc, sizeof(nxp_dsc));
#if LV_DRAW_COMPLEX
/* Draw only the shadow */
nxp_dsc.bg_opa = 0;
nxp_dsc.bg_img_opa = 0;
nxp_dsc.border_opa = 0;
nxp_dsc.outline_opa = 0;
lv_draw_sw_rect(draw_ctx, &nxp_dsc, coords);
/* Draw the background */
nxp_dsc.shadow_opa = 0;
nxp_dsc.bg_opa = dsc->bg_opa;
done = (draw_nxp_bg(draw_ctx, &nxp_dsc, coords) == LV_RES_OK);
#endif /*LV_DRAW_COMPLEX*/
/* Draw the remaining parts */
nxp_dsc.shadow_opa = 0;
if(done)
nxp_dsc.bg_opa = 0;
nxp_dsc.bg_img_opa = dsc->bg_img_opa;
nxp_dsc.border_opa = dsc->border_opa;
nxp_dsc.outline_opa = dsc->outline_opa;
lv_draw_sw_rect(draw_ctx, &nxp_dsc, coords);
}
static lv_res_t draw_nxp_bg(lv_draw_ctx_t * draw_ctx, const lv_draw_rect_dsc_t * dsc, const lv_area_t * coords)
{
if(dsc->bg_opa <= LV_OPA_MIN)
return LV_RES_INV;
lv_area_t bg_coords;
lv_area_copy(&bg_coords, coords);
/*If the border fully covers make the bg area 1px smaller to avoid artifacts on the corners*/
if(dsc->border_width > 1 && dsc->border_opa >= (lv_opa_t)LV_OPA_MAX && dsc->radius != 0) {
bg_coords.x1 += (dsc->border_side & LV_BORDER_SIDE_LEFT) ? 1 : 0;
bg_coords.y1 += (dsc->border_side & LV_BORDER_SIDE_TOP) ? 1 : 0;
bg_coords.x2 -= (dsc->border_side & LV_BORDER_SIDE_RIGHT) ? 1 : 0;
bg_coords.y2 -= (dsc->border_side & LV_BORDER_SIDE_BOTTOM) ? 1 : 0;
}
lv_area_t clipped_coords;
if(!_lv_area_intersect(&clipped_coords, &bg_coords, draw_ctx->clip_area))
return LV_RES_INV;
lv_grad_dir_t grad_dir = dsc->bg_grad.dir;
lv_color_t bg_color = grad_dir == LV_GRAD_DIR_NONE ? dsc->bg_color : dsc->bg_grad.stops[0].color;
if(bg_color.full == dsc->bg_grad.stops[1].color.full) grad_dir = LV_GRAD_DIR_NONE;
bool mask_any = lv_draw_mask_is_any(&bg_coords);
/*
* Most simple case: just a plain rectangle (no mask, no radius, no gradient)
* shall fallback to lv_draw_sw_blend().
*
* Complex case: gradient or radius but no mask.
*/
if(!mask_any && ((dsc->radius != 0) || (grad_dir != LV_GRAD_DIR_NONE)) && !need_argb8565_support()) {
#if LV_USE_GPU_NXP_VG_LITE
lv_res_t res = lv_gpu_nxp_vglite_draw_bg(draw_ctx, dsc, &bg_coords);
if(res != LV_RES_OK)
VG_LITE_LOG_TRACE("VG-Lite draw bg failed. Fallback.");
return res;
#endif
}
return LV_RES_INV;
}
static void lv_draw_nxp_arc(lv_draw_ctx_t * draw_ctx, const lv_draw_arc_dsc_t * dsc, const lv_point_t * center,
uint16_t radius, uint16_t start_angle, uint16_t end_angle)
{
bool done = false;
#if LV_DRAW_COMPLEX
if(dsc->opa <= LV_OPA_MIN)
return;
if(dsc->width == 0)
return;
if(start_angle == end_angle)
return;
#if LV_USE_GPU_NXP_VG_LITE
if(!need_argb8565_support()) {
done = (lv_gpu_nxp_vglite_draw_arc(draw_ctx, dsc, center, (int32_t)radius,
(int32_t)start_angle, (int32_t)end_angle) == LV_RES_OK);
if(!done)
VG_LITE_LOG_TRACE("VG-Lite draw arc failed. Fallback.");
}
#endif
#endif/*LV_DRAW_COMPLEX*/
if(!done)
lv_draw_sw_arc(draw_ctx, dsc, center, radius, start_angle, end_angle);
}
#endif /*LV_USE_GPU_NXP_PXP || LV_USE_GPU_NXP_VG_LITE*/

src/draw/nxp/lv_gpu_nxp.h

@@ -0,0 +1,71 @@
/**
* @file lv_gpu_nxp.h
*
*/
/**
* MIT License
*
* Copyright 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_GPU_NXP_H
#define LV_GPU_NXP_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_PXP || LV_USE_GPU_NXP_VG_LITE
#include "../sw/lv_draw_sw.h"
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
typedef lv_draw_sw_ctx_t lv_draw_nxp_ctx_t;
/**********************
* GLOBAL PROTOTYPES
**********************/
void lv_draw_nxp_ctx_init(struct _lv_disp_drv_t * drv, lv_draw_ctx_t * draw_ctx);
void lv_draw_nxp_ctx_deinit(struct _lv_disp_drv_t * drv, lv_draw_ctx_t * draw_ctx);
/**********************
* MACROS
**********************/
#endif /*LV_USE_GPU_NXP_PXP || LV_USE_GPU_NXP_VG_LITE*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_GPU_NXP_H*/


@@ -0,0 +1,8 @@
CSRCS += lv_draw_pxp_blend.c
CSRCS += lv_gpu_nxp_pxp_osa.c
CSRCS += lv_gpu_nxp_pxp.c
DEPPATH += --dep-path $(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/pxp
VPATH += :$(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/pxp
CFLAGS += "-I$(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/pxp"


@@ -0,0 +1,632 @@
/**
* @file lv_draw_pxp_blend.c
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_draw_pxp_blend.h"
#if LV_USE_GPU_NXP_PXP
/*********************
* DEFINES
*********************/
#if LV_COLOR_16_SWAP
#error Color swap not implemented. Disable LV_COLOR_16_SWAP feature.
#endif
#if LV_COLOR_DEPTH==16
#define PXP_OUT_PIXEL_FORMAT kPXP_OutputPixelFormatRGB565
#define PXP_AS_PIXEL_FORMAT kPXP_AsPixelFormatRGB565
#define PXP_PS_PIXEL_FORMAT kPXP_PsPixelFormatRGB565
#elif LV_COLOR_DEPTH==32
#define PXP_OUT_PIXEL_FORMAT kPXP_OutputPixelFormatARGB8888
#define PXP_AS_PIXEL_FORMAT kPXP_AsPixelFormatARGB8888
#define PXP_PS_PIXEL_FORMAT kPXP_PsPixelFormatRGB888
#else
#error Only 16bit and 32bit color depth are supported. Set LV_COLOR_DEPTH to 16 or 32.
#endif
#if defined (__alpha__) || defined (__ia64__) || defined (__x86_64__) \
|| defined (_WIN64) || defined (__LP64__) || defined (__LLP64__)
#define ALIGN_SIZE 8
#else
#define ALIGN_SIZE 4
#endif
/**********************
* TYPEDEFS
**********************/
/**********************
* STATIC PROTOTYPES
**********************/
/**
* Block Image Transfer - copy rectangular image from src buffer to dst buffer
* with combination of transformation (rotation, scale, recolor) and opacity, alpha channel
* or color keying. This requires two steps. First step is used for transformation into
* a temporary buffer and the second one will handle the color format or opacity.
*
* @param[in/out] dest_buf destination buffer
* @param[in] dest_area area to be copied from src_buf to dst_buf
* @param[in] dest_stride width (stride) of destination buffer in pixels
* @param[in] src_buf source buffer
* @param[in] src_area source area with absolute coordinates to draw on destination buffer
* @param[in] dsc image descriptor
* @param[in] cf color format
* @retval LV_RES_OK Blit completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_PXP_LOG_ERRORS)
*/
static lv_res_t lv_gpu_nxp_pxp_blit_opa(lv_color_t * dest_buf, const lv_area_t * dest_area, lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area,
const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf);
/**
* Block Image Transfer - copy rectangular image from src buffer to dst buffer
* with transformation and full opacity.
*
* @param[in/out] dest_buf destination buffer
* @param[in] dest_area area to be copied from src_buf to dst_buf
* @param[in] dest_stride width (stride) of destination buffer in pixels
* @param[in] src_buf source buffer
* @param[in] src_area source area with absolute coordinates to draw on destination buffer
* @param[in] dsc image descriptor
* @param[in] cf color format
* @retval LV_RES_OK Blit completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_PXP_LOG_ERRORS)
*/
static lv_res_t lv_gpu_nxp_pxp_blit_cover(lv_color_t * dest_buf, const lv_area_t * dest_area,
lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area,
const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf);
/**
* Block Image Transfer - copy rectangular image from src buffer to dst buffer
* without transformation but handling color format or opacity.
*
* @param[in/out] dest_buf destination buffer
* @param[in] dest_area area to be copied from src_buf to dst_buf
* @param[in] dest_stride width (stride) of destination buffer in pixels
* @param[in] src_buf source buffer
* @param[in] src_area source area with absolute coordinates to draw on destination buffer
* @param[in] dsc image descriptor
* @param[in] cf color format
* @retval LV_RES_OK Blit completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_PXP_LOG_ERRORS)
*/
static lv_res_t lv_gpu_nxp_pxp_blit_cf(lv_color_t * dest_buf, const lv_area_t * dest_area,
lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area,
const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf);
/**********************
* STATIC VARIABLES
**********************/
/**********************
* MACROS
**********************/
#define ROUND_UP(x, align) ((x + (align - 1)) & ~(align - 1))
/**********************
* GLOBAL FUNCTIONS
**********************/
lv_res_t lv_gpu_nxp_pxp_fill(lv_color_t * dest_buf, lv_coord_t dest_stride, const lv_area_t * fill_area,
lv_color_t color, lv_opa_t opa)
{
uint32_t area_size = lv_area_get_size(fill_area);
lv_coord_t area_w = lv_area_get_width(fill_area);
lv_coord_t area_h = lv_area_get_height(fill_area);
if(opa >= (lv_opa_t)LV_OPA_MAX) {
if(area_size < LV_GPU_NXP_PXP_FILL_SIZE_LIMIT) {
PXP_LOG_TRACE("Area size %d smaller than limit %d.", area_size, LV_GPU_NXP_PXP_FILL_SIZE_LIMIT);
return LV_RES_INV;
}
}
else {
if(area_size < LV_GPU_NXP_PXP_FILL_OPA_SIZE_LIMIT) {
PXP_LOG_TRACE("Area size %d smaller than limit %d.", area_size, LV_GPU_NXP_PXP_FILL_OPA_SIZE_LIMIT);
return LV_RES_INV;
}
}
PXP_Init(LV_GPU_NXP_PXP_ID);
PXP_EnableCsc1(LV_GPU_NXP_PXP_ID, false); /*Disable CSC1, it is enabled by default.*/
PXP_SetProcessBlockSize(LV_GPU_NXP_PXP_ID, kPXP_BlockSize16); /*Block size 16x16 for higher performance*/
/*OUT buffer configure*/
pxp_output_buffer_config_t outputConfig = {
.pixelFormat = PXP_OUT_PIXEL_FORMAT,
.interlacedMode = kPXP_OutputProgressive,
.buffer0Addr = (uint32_t)(dest_buf + dest_stride * fill_area->y1 + fill_area->x1),
.buffer1Addr = (uint32_t)NULL,
.pitchBytes = dest_stride * sizeof(lv_color_t),
.width = area_w,
.height = area_h
};
PXP_SetOutputBufferConfig(LV_GPU_NXP_PXP_ID, &outputConfig);
if(opa >= (lv_opa_t)LV_OPA_MAX) {
/*Simple color fill without opacity - AS disabled*/
PXP_SetAlphaSurfacePosition(LV_GPU_NXP_PXP_ID, 0xFFFFU, 0xFFFFU, 0U, 0U);
}
else {
/*Fill with opacity - AS used as source (same as OUT)*/
pxp_as_buffer_config_t asBufferConfig = {
.pixelFormat = PXP_AS_PIXEL_FORMAT,
.bufferAddr = (uint32_t)outputConfig.buffer0Addr,
.pitchBytes = outputConfig.pitchBytes
};
PXP_SetAlphaSurfaceBufferConfig(LV_GPU_NXP_PXP_ID, &asBufferConfig);
PXP_SetAlphaSurfacePosition(LV_GPU_NXP_PXP_ID, 0U, 0U, area_w, area_h);
}
/*Disable PS, use as color generator*/
PXP_SetProcessSurfacePosition(LV_GPU_NXP_PXP_ID, 0xFFFFU, 0xFFFFU, 0U, 0U);
PXP_SetProcessSurfaceBackGroundColor(LV_GPU_NXP_PXP_ID, lv_color_to32(color));
/**
* Configure Porter-Duff blending - src settings are unused for fill without opacity (opa = 0xff).
*
* Note: srcFactorMode and dstFactorMode are inverted in fsl_pxp.h:
* srcFactorMode is actually applied on PS alpha value
* dstFactorMode is actually applied on AS alpha value
*/
pxp_porter_duff_config_t pdConfig = {
.enable = 1,
.dstColorMode = kPXP_PorterDuffColorNoAlpha,
.srcColorMode = kPXP_PorterDuffColorNoAlpha,
.dstGlobalAlphaMode = kPXP_PorterDuffGlobalAlpha,
.srcGlobalAlphaMode = kPXP_PorterDuffGlobalAlpha,
.dstFactorMode = kPXP_PorterDuffFactorStraight,
.srcFactorMode = (opa >= (lv_opa_t)LV_OPA_MAX) ? kPXP_PorterDuffFactorStraight : kPXP_PorterDuffFactorInversed,
.dstGlobalAlpha = opa,
.srcGlobalAlpha = opa,
.dstAlphaMode = kPXP_PorterDuffAlphaStraight, /*don't care*/
.srcAlphaMode = kPXP_PorterDuffAlphaStraight /*don't care*/
};
PXP_SetPorterDuffConfig(LV_GPU_NXP_PXP_ID, &pdConfig);
lv_gpu_nxp_pxp_run(); /*Start PXP task*/
return LV_RES_OK;
}
lv_res_t lv_gpu_nxp_pxp_blit(lv_color_t * dest_buf, const lv_area_t * dest_area, lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area, lv_opa_t opa, lv_disp_rot_t angle)
{
uint32_t dest_size = lv_area_get_size(dest_area);
lv_coord_t dest_w = lv_area_get_width(dest_area);
lv_coord_t dest_h = lv_area_get_height(dest_area);
if(opa >= (lv_opa_t)LV_OPA_MAX) {
if(dest_size < LV_GPU_NXP_PXP_BLIT_SIZE_LIMIT) {
PXP_LOG_TRACE("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_PXP_BLIT_SIZE_LIMIT);
return LV_RES_INV;
}
}
else {
if(dest_size < LV_GPU_NXP_PXP_BLIT_OPA_SIZE_LIMIT) {
PXP_LOG_TRACE("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_PXP_BLIT_OPA_SIZE_LIMIT);
return LV_RES_INV;
}
}
PXP_Init(LV_GPU_NXP_PXP_ID);
PXP_EnableCsc1(LV_GPU_NXP_PXP_ID, false); /*Disable CSC1, it is enabled by default.*/
PXP_SetProcessBlockSize(LV_GPU_NXP_PXP_ID, kPXP_BlockSize16); /*block size 16x16 for higher performance*/
/* convert rotation angle */
pxp_rotate_degree_t pxp_rot;
switch(angle) {
case LV_DISP_ROT_NONE:
pxp_rot = kPXP_Rotate0;
break;
case LV_DISP_ROT_90:
pxp_rot = kPXP_Rotate90;
break;
case LV_DISP_ROT_180:
pxp_rot = kPXP_Rotate180;
break;
case LV_DISP_ROT_270:
pxp_rot = kPXP_Rotate270;
break;
default:
pxp_rot = kPXP_Rotate0;
break;
}
PXP_SetRotateConfig(LV_GPU_NXP_PXP_ID, kPXP_RotateOutputBuffer, pxp_rot, kPXP_FlipDisable);
pxp_as_blend_config_t asBlendConfig = {
.alpha = opa,
.invertAlpha = false,
.alphaMode = kPXP_AlphaRop,
.ropMode = kPXP_RopMergeAs
};
if(opa >= (lv_opa_t)LV_OPA_MAX) {
/*Simple blit, no effect - Disable PS buffer*/
PXP_SetProcessSurfacePosition(LV_GPU_NXP_PXP_ID, 0xFFFFU, 0xFFFFU, 0U, 0U);
}
else {
pxp_ps_buffer_config_t psBufferConfig = {
.pixelFormat = PXP_PS_PIXEL_FORMAT,
.swapByte = false,
.bufferAddr = (uint32_t)(dest_buf + dest_stride * dest_area->y1 + dest_area->x1),
.bufferAddrU = 0U,
.bufferAddrV = 0U,
.pitchBytes = dest_stride * sizeof(lv_color_t)
};
asBlendConfig.alphaMode = kPXP_AlphaOverride;
PXP_SetProcessSurfaceBufferConfig(LV_GPU_NXP_PXP_ID, &psBufferConfig);
PXP_SetProcessSurfacePosition(LV_GPU_NXP_PXP_ID, 0U, 0U, dest_w - 1, dest_h - 1);
}
lv_coord_t src_stride = lv_area_get_width(src_area);
/*AS buffer - source image*/
pxp_as_buffer_config_t asBufferConfig = {
.pixelFormat = PXP_AS_PIXEL_FORMAT,
.bufferAddr = (uint32_t)src_buf,
.pitchBytes = src_stride * sizeof(lv_color_t)
};
PXP_SetAlphaSurfaceBufferConfig(LV_GPU_NXP_PXP_ID, &asBufferConfig);
PXP_SetAlphaSurfacePosition(LV_GPU_NXP_PXP_ID, 0U, 0U, dest_w - 1U, dest_h - 1U);
PXP_SetAlphaSurfaceBlendConfig(LV_GPU_NXP_PXP_ID, &asBlendConfig);
PXP_EnableAlphaSurfaceOverlayColorKey(LV_GPU_NXP_PXP_ID, false);
/*Output buffer.*/
pxp_output_buffer_config_t outputBufferConfig = {
.pixelFormat = (pxp_output_pixel_format_t)PXP_OUT_PIXEL_FORMAT,
.interlacedMode = kPXP_OutputProgressive,
.buffer0Addr = (uint32_t)(dest_buf + dest_stride * dest_area->y1 + dest_area->x1),
.buffer1Addr = (uint32_t)0U,
.pitchBytes = dest_stride * sizeof(lv_color_t),
.width = dest_w,
.height = dest_h
};
PXP_SetOutputBufferConfig(LV_GPU_NXP_PXP_ID, &outputBufferConfig);
lv_gpu_nxp_pxp_run(); /* Start PXP task */
return LV_RES_OK;
}
lv_res_t lv_gpu_nxp_pxp_blit_transform(lv_color_t * dest_buf, const lv_area_t * dest_area, lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area,
const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf)
{
uint32_t dest_size = lv_area_get_size(dest_area);
if(dsc->opa >= (lv_opa_t)LV_OPA_MAX) {
if(dest_size < LV_GPU_NXP_PXP_BLIT_SIZE_LIMIT) {
PXP_LOG_TRACE("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_PXP_BLIT_SIZE_LIMIT);
return LV_RES_INV;
}
}
else {
if(dest_size < LV_GPU_NXP_PXP_BLIT_OPA_SIZE_LIMIT) {
PXP_LOG_TRACE("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_PXP_BLIT_OPA_SIZE_LIMIT);
return LV_RES_INV;
}
}
bool recolor = (dsc->recolor_opa != LV_OPA_TRANSP);
bool rotation = (dsc->angle != 0);
if(rotation) {
if(dsc->angle != 0 && dsc->angle != 900 && dsc->angle != 1800 && dsc->angle != 2700) {
PXP_LOG_TRACE("Rotation angle %d is not supported. PXP can rotate only 90x angle.", dsc->angle);
return LV_RES_INV;
}
}
if(recolor || rotation) {
if(dsc->opa >= (lv_opa_t)LV_OPA_MAX && !lv_img_cf_has_alpha(cf) && !lv_img_cf_is_chroma_keyed(cf))
return lv_gpu_nxp_pxp_blit_cover(dest_buf, dest_area, dest_stride, src_buf, src_area, dsc, cf);
else
/*Recolor and/or rotation with alpha or opacity is done in two steps.*/
return lv_gpu_nxp_pxp_blit_opa(dest_buf, dest_area, dest_stride, src_buf, src_area, dsc, cf);
}
return lv_gpu_nxp_pxp_blit_cf(dest_buf, dest_area, dest_stride, src_buf, src_area, dsc, cf);
}
/**********************
* STATIC FUNCTIONS
**********************/
static lv_res_t lv_gpu_nxp_pxp_blit_opa(lv_color_t * dest_buf, const lv_area_t * dest_area, lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area,
const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf)
{
lv_coord_t dest_w = lv_area_get_width(dest_area);
lv_coord_t dest_h = lv_area_get_height(dest_area);
lv_res_t res;
uint32_t size = dest_w * dest_h * sizeof(lv_color_t);
if(ROUND_UP(size, ALIGN_SIZE) >= LV_MEM_SIZE)
PXP_RETURN_INV("Insufficient memory for temporary buffer. Please increase LV_MEM_SIZE.");
lv_color_t * tmp_buf = (lv_color_t *)lv_mem_buf_get(size);
if(!tmp_buf)
PXP_RETURN_INV("Allocating temporary buffer failed.");
const lv_area_t tmp_area = {
.x1 = 0,
.y1 = 0,
.x2 = dest_w - 1,
.y2 = dest_h - 1
};
/*Step 1: Transform with full opacity to temporary buffer*/
res = lv_gpu_nxp_pxp_blit_cover(tmp_buf, &tmp_area, dest_w, src_buf, src_area, dsc, cf);
if(res != LV_RES_OK) {
PXP_LOG_TRACE("Blit cover with full opacity failed.");
lv_mem_buf_release(tmp_buf);
return res;
}
/*Step 2: Blit temporary results with required opacity to output*/
res = lv_gpu_nxp_pxp_blit_cf(dest_buf, dest_area, dest_stride, tmp_buf, &tmp_area, dsc, cf);
/*Clean-up memory*/
lv_mem_buf_release(tmp_buf);
return res;
}
static lv_res_t lv_gpu_nxp_pxp_blit_cover(lv_color_t * dest_buf, const lv_area_t * dest_area,
lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area,
const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf)
{
lv_coord_t dest_w = lv_area_get_width(dest_area);
lv_coord_t dest_h = lv_area_get_height(dest_area);
bool recolor = (dsc->recolor_opa != LV_OPA_TRANSP);
bool rotation = (dsc->angle != 0);
PXP_Init(LV_GPU_NXP_PXP_ID);
PXP_EnableCsc1(LV_GPU_NXP_PXP_ID, false); /*Disable CSC1, it is enabled by default.*/
PXP_SetProcessBlockSize(LV_GPU_NXP_PXP_ID, kPXP_BlockSize16); /*block size 16x16 for higher performance*/
if(rotation) {
/*
* PXP is set to process 16x16 blocks to optimize the system for memory
* bandwidth and image processing time.
* The output engine essentially truncates any output pixels after the
* desired number of pixels has been written.
* When rotating a source image and the output is not divisible by the block
* size, the incorrect pixels could be truncated and the final output image
* can look shifted.
*/
if(lv_area_get_width(src_area) % 16 || lv_area_get_height(src_area) % 16) {
PXP_LOG_TRACE("Rotation is not supported for image w/o alignment to block size 16x16.");
return LV_RES_INV;
}
/*Convert rotation angle*/
pxp_rotate_degree_t pxp_rot;
switch(dsc->angle) {
case 0:
pxp_rot = kPXP_Rotate0;
break;
case 900:
pxp_rot = kPXP_Rotate90;
break;
case 1800:
pxp_rot = kPXP_Rotate180;
break;
case 2700:
pxp_rot = kPXP_Rotate270;
break;
default:
PXP_LOG_TRACE("Rotation angle %d is not supported. PXP can rotate only 90x angle.", dsc->angle);
return LV_RES_INV;
}
PXP_SetRotateConfig(LV_GPU_NXP_PXP_ID, kPXP_RotateOutputBuffer, pxp_rot, kPXP_FlipDisable);
}
lv_coord_t src_stride = lv_area_get_width(src_area);
/*AS buffer - source image*/
pxp_as_buffer_config_t asBufferConfig = {
.pixelFormat = PXP_AS_PIXEL_FORMAT,
.bufferAddr = (uint32_t)src_buf,
.pitchBytes = src_stride * sizeof(lv_color_t)
};
PXP_SetAlphaSurfaceBufferConfig(LV_GPU_NXP_PXP_ID, &asBufferConfig);
PXP_SetAlphaSurfacePosition(LV_GPU_NXP_PXP_ID, 0U, 0U, dest_w - 1U, dest_h - 1U);
/*Disable PS buffer*/
PXP_SetProcessSurfacePosition(LV_GPU_NXP_PXP_ID, 0xFFFFU, 0xFFFFU, 0U, 0U);
if(recolor)
/*Use as color generator*/
PXP_SetProcessSurfaceBackGroundColor(LV_GPU_NXP_PXP_ID, lv_color_to32(dsc->recolor));
/*Output buffer*/
pxp_output_buffer_config_t outputBufferConfig = {
.pixelFormat = (pxp_output_pixel_format_t)PXP_OUT_PIXEL_FORMAT,
.interlacedMode = kPXP_OutputProgressive,
.buffer0Addr = (uint32_t)(dest_buf + dest_stride * dest_area->y1 + dest_area->x1),
.buffer1Addr = (uint32_t)0U,
.pitchBytes = dest_stride * sizeof(lv_color_t),
.width = dest_w,
.height = dest_h
};
PXP_SetOutputBufferConfig(LV_GPU_NXP_PXP_ID, &outputBufferConfig);
if(recolor || lv_img_cf_has_alpha(cf)) {
/**
* Configure Porter-Duff blending.
*
* Note: srcFactorMode and dstFactorMode are inverted in fsl_pxp.h:
* srcFactorMode is actually applied on PS alpha value
* dstFactorMode is actually applied on AS alpha value
*/
pxp_porter_duff_config_t pdConfig = {
.enable = 1,
.dstColorMode = kPXP_PorterDuffColorWithAlpha,
.srcColorMode = kPXP_PorterDuffColorNoAlpha,
.dstGlobalAlphaMode = kPXP_PorterDuffGlobalAlpha,
.srcGlobalAlphaMode = lv_img_cf_has_alpha(cf) ? kPXP_PorterDuffLocalAlpha : kPXP_PorterDuffGlobalAlpha,
.dstFactorMode = kPXP_PorterDuffFactorStraight,
.srcFactorMode = kPXP_PorterDuffFactorInversed,
.dstGlobalAlpha = recolor ? dsc->recolor_opa : 0x00,
.srcGlobalAlpha = 0xff,
.dstAlphaMode = kPXP_PorterDuffAlphaStraight, /*don't care*/
.srcAlphaMode = kPXP_PorterDuffAlphaStraight
};
PXP_SetPorterDuffConfig(LV_GPU_NXP_PXP_ID, &pdConfig);
}
lv_gpu_nxp_pxp_run(); /*Start PXP task*/
return LV_RES_OK;
}
static lv_res_t lv_gpu_nxp_pxp_blit_cf(lv_color_t * dest_buf, const lv_area_t * dest_area,
lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area,
const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf)
{
lv_coord_t dest_w = lv_area_get_width(dest_area);
lv_coord_t dest_h = lv_area_get_height(dest_area);
PXP_Init(LV_GPU_NXP_PXP_ID);
PXP_EnableCsc1(LV_GPU_NXP_PXP_ID, false); /*Disable CSC1, it is enabled by default.*/
PXP_SetProcessBlockSize(LV_GPU_NXP_PXP_ID, kPXP_BlockSize16); /*block size 16x16 for higher performance*/
pxp_as_blend_config_t asBlendConfig = {
.alpha = dsc->opa,
.invertAlpha = false,
.alphaMode = kPXP_AlphaRop,
.ropMode = kPXP_RopMergeAs
};
if(dsc->opa >= (lv_opa_t)LV_OPA_MAX && !lv_img_cf_is_chroma_keyed(cf) && !lv_img_cf_has_alpha(cf)) {
/*Simple blit, no effect - Disable PS buffer*/
PXP_SetProcessSurfacePosition(LV_GPU_NXP_PXP_ID, 0xFFFFU, 0xFFFFU, 0U, 0U);
}
else {
/*PS must be enabled to fetch background pixels.
PS and OUT buffers are the same, blend will be done in-place*/
pxp_ps_buffer_config_t psBufferConfig = {
.pixelFormat = PXP_PS_PIXEL_FORMAT,
.swapByte = false,
.bufferAddr = (uint32_t)(dest_buf + dest_stride * dest_area->y1 + dest_area->x1),
.bufferAddrU = 0U,
.bufferAddrV = 0U,
.pitchBytes = dest_stride * sizeof(lv_color_t)
};
if(dsc->opa >= (lv_opa_t)LV_OPA_MAX) {
asBlendConfig.alphaMode = lv_img_cf_has_alpha(cf) ? kPXP_AlphaEmbedded : kPXP_AlphaOverride;
}
else {
asBlendConfig.alphaMode = lv_img_cf_has_alpha(cf) ? kPXP_AlphaMultiply : kPXP_AlphaOverride;
}
PXP_SetProcessSurfaceBufferConfig(LV_GPU_NXP_PXP_ID, &psBufferConfig);
PXP_SetProcessSurfacePosition(LV_GPU_NXP_PXP_ID, 0U, 0U, dest_w - 1, dest_h - 1);
}
lv_coord_t src_stride = lv_area_get_width(src_area);
/*AS buffer - source image*/
pxp_as_buffer_config_t asBufferConfig = {
.pixelFormat = PXP_AS_PIXEL_FORMAT,
.bufferAddr = (uint32_t)src_buf,
.pitchBytes = src_stride * sizeof(lv_color_t)
};
PXP_SetAlphaSurfaceBufferConfig(LV_GPU_NXP_PXP_ID, &asBufferConfig);
PXP_SetAlphaSurfacePosition(LV_GPU_NXP_PXP_ID, 0U, 0U, dest_w - 1U, dest_h - 1U);
PXP_SetAlphaSurfaceBlendConfig(LV_GPU_NXP_PXP_ID, &asBlendConfig);
if(lv_img_cf_is_chroma_keyed(cf)) {
lv_color_t colorKeyLow = LV_COLOR_CHROMA_KEY;
lv_color_t colorKeyHigh = LV_COLOR_CHROMA_KEY;
bool recolor = (dsc->recolor_opa != LV_OPA_TRANSP);
if(recolor) {
/* New color key after recoloring */
lv_color_t colorKey = lv_color_mix(dsc->recolor, LV_COLOR_CHROMA_KEY, dsc->recolor_opa);
LV_COLOR_SET_R(colorKeyLow, colorKey.ch.red != 0 ? colorKey.ch.red - 1 : 0);
LV_COLOR_SET_G(colorKeyLow, colorKey.ch.green != 0 ? colorKey.ch.green - 1 : 0);
LV_COLOR_SET_B(colorKeyLow, colorKey.ch.blue != 0 ? colorKey.ch.blue - 1 : 0);
#if LV_COLOR_DEPTH==16
LV_COLOR_SET_R(colorKeyHigh, colorKey.ch.red != 0x1f ? colorKey.ch.red + 1 : 0x1f);
LV_COLOR_SET_G(colorKeyHigh, colorKey.ch.green != 0x3f ? colorKey.ch.green + 1 : 0x3f);
LV_COLOR_SET_B(colorKeyHigh, colorKey.ch.blue != 0x1f ? colorKey.ch.blue + 1 : 0x1f);
#else /*LV_COLOR_DEPTH==32*/
LV_COLOR_SET_R(colorKeyHigh, colorKey.ch.red != 0xff ? colorKey.ch.red + 1 : 0xff);
LV_COLOR_SET_G(colorKeyHigh, colorKey.ch.green != 0xff ? colorKey.ch.green + 1 : 0xff);
LV_COLOR_SET_B(colorKeyHigh, colorKey.ch.blue != 0xff ? colorKey.ch.blue + 1 : 0xff);
#endif
}
PXP_SetAlphaSurfaceOverlayColorKey(LV_GPU_NXP_PXP_ID, lv_color_to32(colorKeyLow),
lv_color_to32(colorKeyHigh));
}
PXP_EnableAlphaSurfaceOverlayColorKey(LV_GPU_NXP_PXP_ID, lv_img_cf_is_chroma_keyed(cf));
/*Output buffer.*/
pxp_output_buffer_config_t outputBufferConfig = {
.pixelFormat = (pxp_output_pixel_format_t)PXP_OUT_PIXEL_FORMAT,
.interlacedMode = kPXP_OutputProgressive,
.buffer0Addr = (uint32_t)(dest_buf + dest_stride * dest_area->y1 + dest_area->x1),
.buffer1Addr = (uint32_t)0U,
.pitchBytes = dest_stride * sizeof(lv_color_t),
.width = dest_w,
.height = dest_h
};
PXP_SetOutputBufferConfig(LV_GPU_NXP_PXP_ID, &outputBufferConfig);
lv_gpu_nxp_pxp_run(); /* Start PXP task */
return LV_RES_OK;
}
#endif /*LV_USE_GPU_NXP_PXP*/


@@ -0,0 +1,143 @@
/**
* @file lv_draw_pxp_blend.h
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_DRAW_PXP_BLEND_H
#define LV_DRAW_PXP_BLEND_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_PXP
#include "lv_gpu_nxp_pxp.h"
#include "../../sw/lv_draw_sw.h"
/*********************
* DEFINES
*********************/
#ifndef LV_GPU_NXP_PXP_BLIT_SIZE_LIMIT
/** Minimum area (in pixels) for image copy with 100% opacity to be handled by PXP*/
#define LV_GPU_NXP_PXP_BLIT_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_PXP_BLIT_OPA_SIZE_LIMIT
/** Minimum area (in pixels) for image copy with transparency to be handled by PXP*/
#define LV_GPU_NXP_PXP_BLIT_OPA_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_PXP_BUFF_SYNC_BLIT_SIZE_LIMIT
/** Minimum invalidated area (in pixels) to be synchronized by PXP during buffer sync */
#define LV_GPU_NXP_PXP_BUFF_SYNC_BLIT_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_PXP_FILL_SIZE_LIMIT
/** Minimum area (in pixels) to be filled by PXP with 100% opacity*/
#define LV_GPU_NXP_PXP_FILL_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_PXP_FILL_OPA_SIZE_LIMIT
/** Minimum area (in pixels) to be filled by PXP with transparency*/
#define LV_GPU_NXP_PXP_FILL_OPA_SIZE_LIMIT 5000
#endif
/**********************
* TYPEDEFS
**********************/
/**********************
* GLOBAL PROTOTYPES
**********************/
/**
* Fill area, with optional opacity.
*
* @param[in/out] dest_buf destination buffer
* @param[in] dest_stride width (stride) of destination buffer in pixels
* @param[in] fill_area area to fill
* @param[in] color color
* @param[in] opa transparency of the color
* @retval LV_RES_OK Fill completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_PXP_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_pxp_fill(lv_color_t * dest_buf, lv_coord_t dest_stride, const lv_area_t * fill_area,
lv_color_t color, lv_opa_t opa);
/**
* Block Image Transfer - copy rectangular image from src_buf to dst_buf with effects.
* By default, image is copied directly, with optional opacity. This function can also
* rotate the display output buffer to a specified angle (90x step).
*
* @param[in/out] dest_buf destination buffer
* @param[in] dest_area destination area
* @param[in] dest_stride width (stride) of destination buffer in pixels
* @param[in] src_buf source buffer
* @param[in] src_area source area with absolute coordinates to draw on destination buffer
* @param[in] opa opacity of the result
* @param[in] angle display rotation angle (90x)
* @retval LV_RES_OK Blit completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_PXP_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_pxp_blit(lv_color_t * dest_buf, const lv_area_t * dest_area, lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area, lv_opa_t opa, lv_disp_rot_t angle);
/**
* Block Image Transfer - copy rectangular image from src_buf to dst_buf with transformation.
*
*
* @param[in/out] dest_buf destination buffer
* @param[in] dest_area destination area
* @param[in] dest_stride width (stride) of destination buffer in pixels
* @param[in] src_buf source buffer
* @param[in] src_area source area with absolute coordinates to draw on destination buffer
* @param[in] dsc image descriptor
* @param[in] cf color format
* @retval LV_RES_OK Blit completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_PXP_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_pxp_blit_transform(lv_color_t * dest_buf, const lv_area_t * dest_area, lv_coord_t dest_stride,
const lv_color_t * src_buf, const lv_area_t * src_area, const lv_draw_img_dsc_t * dsc, lv_img_cf_t cf);
/**********************
* MACROS
**********************/
#endif /*LV_USE_GPU_NXP_PXP*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_DRAW_PXP_BLEND_H*/


@@ -0,0 +1,116 @@
/**
* @file lv_gpu_nxp_pxp.c
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_gpu_nxp_pxp.h"
#if LV_USE_GPU_NXP_PXP
#include "lv_gpu_nxp_pxp_osa.h"
#include "../../../core/lv_refr.h"
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/**********************
* STATIC PROTOTYPES
**********************/
/**
* Clean & invalidate cache.
*/
static void invalidate_cache(void);
/**********************
* STATIC VARIABLES
**********************/
static lv_nxp_pxp_cfg_t * pxp_cfg;
/**********************
* MACROS
**********************/
/**********************
* GLOBAL FUNCTIONS
**********************/
lv_res_t lv_gpu_nxp_pxp_init(void)
{
pxp_cfg = lv_gpu_nxp_pxp_get_cfg();
if(!pxp_cfg || !pxp_cfg->pxp_interrupt_deinit || !pxp_cfg->pxp_interrupt_init || !pxp_cfg->pxp_run)
PXP_RETURN_INV("PXP configuration error.");
PXP_Init(LV_GPU_NXP_PXP_ID);
PXP_EnableCsc1(LV_GPU_NXP_PXP_ID, false); /*Disable CSC1, it is enabled by default.*/
PXP_EnableInterrupts(LV_GPU_NXP_PXP_ID, kPXP_CompleteInterruptEnable);
if(pxp_cfg->pxp_interrupt_init() != LV_RES_OK) {
PXP_Deinit(LV_GPU_NXP_PXP_ID);
PXP_RETURN_INV("PXP interrupt init failed.");
}
return LV_RES_OK;
}
void lv_gpu_nxp_pxp_deinit(void)
{
pxp_cfg->pxp_interrupt_deinit();
PXP_DisableInterrupts(PXP, kPXP_CompleteInterruptEnable);
PXP_Deinit(LV_GPU_NXP_PXP_ID);
}
void lv_gpu_nxp_pxp_run(void)
{
/*Clean & invalidate cache*/
invalidate_cache();
pxp_cfg->pxp_run();
}
/**********************
* STATIC FUNCTIONS
**********************/
static void invalidate_cache(void)
{
lv_disp_t * disp = _lv_refr_get_disp_refreshing();
if(disp->driver->clean_dcache_cb)
disp->driver->clean_dcache_cb(disp->driver);
}
#endif /*LV_USE_GPU_NXP_PXP*/


@@ -0,0 +1,153 @@
/**
* @file lv_gpu_nxp_pxp.h
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_GPU_NXP_PXP_H
#define LV_GPU_NXP_PXP_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_PXP
#include "fsl_cache.h"
#include "fsl_pxp.h"
#include "../../../misc/lv_log.h"
/*********************
* DEFINES
*********************/
/** PXP module instance to use*/
#define LV_GPU_NXP_PXP_ID PXP
/** PXP interrupt line ID*/
#define LV_GPU_NXP_PXP_IRQ_ID PXP_IRQn
#ifndef LV_GPU_NXP_PXP_LOG_ERRORS
/** Enable logging of PXP errors (\see LV_LOG_ERROR)*/
#define LV_GPU_NXP_PXP_LOG_ERRORS 1
#endif
#ifndef LV_GPU_NXP_PXP_LOG_TRACES
/** Enable logging of PXP traces (\see LV_LOG_ERROR)*/
#define LV_GPU_NXP_PXP_LOG_TRACES 0
#endif
/**********************
* TYPEDEFS
**********************/
/**
* NXP PXP device configuration - call-backs used for
* interrupt init/wait/deinit.
*/
typedef struct {
/** Callback for PXP interrupt initialization*/
lv_res_t (*pxp_interrupt_init)(void);
/** Callback for PXP interrupt de-initialization*/
void (*pxp_interrupt_deinit)(void);
/** Callback that should start PXP and wait for operation complete*/
void (*pxp_run)(void);
} lv_nxp_pxp_cfg_t;
/**********************
* GLOBAL PROTOTYPES
**********************/
/**
* Reset and initialize PXP device. This function should be called as a part
* of display init sequence.
*
* @retval LV_RES_OK PXP init completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_PXP_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_pxp_init(void);
/**
* Disable PXP device. Should be called during display deinit sequence.
*/
void lv_gpu_nxp_pxp_deinit(void);
/**
* Start PXP job and wait for completion.
*/
void lv_gpu_nxp_pxp_run(void);
/**********************
* MACROS
**********************/
#define PXP_COND_STOP(cond, txt) \
do { \
if (cond) { \
LV_LOG_ERROR("%s. STOP!", txt); \
for ( ; ; ); \
} \
} while(0)
#if LV_GPU_NXP_PXP_LOG_ERRORS
#define PXP_RETURN_INV(fmt, ...) \
do { \
LV_LOG_ERROR(fmt, ##__VA_ARGS__); \
return LV_RES_INV; \
} while (0)
#else
#define PXP_RETURN_INV(fmt, ...) \
do { \
return LV_RES_INV; \
}while(0)
#endif /*LV_GPU_NXP_PXP_LOG_ERRORS*/
#if LV_GPU_NXP_PXP_LOG_TRACES
#define PXP_LOG_TRACE(fmt, ...) \
do { \
LV_LOG_ERROR(fmt, ##__VA_ARGS__); \
} while (0)
#else
#define PXP_LOG_TRACE(fmt, ...) \
do { \
} while (0)
#endif /*LV_GPU_NXP_PXP_LOG_TRACES*/
#endif /*LV_USE_GPU_NXP_PXP*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_GPU_NXP_PXP_H*/


@@ -0,0 +1,164 @@
/**
* @file lv_gpu_nxp_pxp_osa.c
*
*/
/**
* MIT License
*
* Copyright 2020, 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_gpu_nxp_pxp_osa.h"
#if LV_USE_GPU_NXP_PXP && LV_USE_GPU_NXP_PXP_AUTO_INIT
#include "../../../misc/lv_log.h"
#include "fsl_pxp.h"
#if defined(SDK_OS_FREE_RTOS)
#include "FreeRTOS.h"
#include "semphr.h"
#endif
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/**********************
* STATIC PROTOTYPES
**********************/
/**
* PXP interrupt initialization.
*/
static lv_res_t _lv_gpu_nxp_pxp_interrupt_init(void);
/**
* PXP interrupt de-initialization.
*/
static void _lv_gpu_nxp_pxp_interrupt_deinit(void);
/**
* Start the PXP job and wait for task completion.
*/
static void _lv_gpu_nxp_pxp_run(void);
/**********************
* STATIC VARIABLES
**********************/
#if defined(SDK_OS_FREE_RTOS)
static SemaphoreHandle_t s_pxpIdle;
#else
static volatile bool s_pxpIdle;
#endif
static lv_nxp_pxp_cfg_t pxp_default_cfg = {
.pxp_interrupt_init = _lv_gpu_nxp_pxp_interrupt_init,
.pxp_interrupt_deinit = _lv_gpu_nxp_pxp_interrupt_deinit,
.pxp_run = _lv_gpu_nxp_pxp_run
};
/**********************
* MACROS
**********************/
/**********************
* GLOBAL FUNCTIONS
**********************/
void PXP_IRQHandler(void)
{
#if defined(SDK_OS_FREE_RTOS)
BaseType_t taskAwake = pdFALSE;
#endif
if(kPXP_CompleteFlag & PXP_GetStatusFlags(LV_GPU_NXP_PXP_ID)) {
PXP_ClearStatusFlags(LV_GPU_NXP_PXP_ID, kPXP_CompleteFlag);
#if defined(SDK_OS_FREE_RTOS)
xSemaphoreGiveFromISR(s_pxpIdle, &taskAwake);
portYIELD_FROM_ISR(taskAwake);
#else
s_pxpIdle = true;
#endif
}
}
lv_nxp_pxp_cfg_t * lv_gpu_nxp_pxp_get_cfg(void)
{
return &pxp_default_cfg;
}
/**********************
* STATIC FUNCTIONS
**********************/
static lv_res_t _lv_gpu_nxp_pxp_interrupt_init(void)
{
#if defined(SDK_OS_FREE_RTOS)
s_pxpIdle = xSemaphoreCreateBinary();
if(s_pxpIdle == NULL)
return LV_RES_INV;
NVIC_SetPriority(LV_GPU_NXP_PXP_IRQ_ID, configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY + 1);
#else
s_pxpIdle = true;
#endif
NVIC_EnableIRQ(LV_GPU_NXP_PXP_IRQ_ID);
return LV_RES_OK;
}
static void _lv_gpu_nxp_pxp_interrupt_deinit(void)
{
NVIC_DisableIRQ(LV_GPU_NXP_PXP_IRQ_ID);
#if defined(SDK_OS_FREE_RTOS)
vSemaphoreDelete(s_pxpIdle);
#endif
}
static void _lv_gpu_nxp_pxp_run(void)
{
#if !defined(SDK_OS_FREE_RTOS)
s_pxpIdle = false;
#endif
PXP_EnableInterrupts(LV_GPU_NXP_PXP_ID, kPXP_CompleteInterruptEnable);
PXP_Start(LV_GPU_NXP_PXP_ID);
#if defined(SDK_OS_FREE_RTOS)
PXP_COND_STOP(!xSemaphoreTake(s_pxpIdle, portMAX_DELAY), "xSemaphoreTake failed.");
#else
while(s_pxpIdle == false) {
}
#endif
}
#endif /*LV_USE_GPU_NXP_PXP && LV_USE_GPU_NXP_PXP_AUTO_INIT*/


@@ -0,0 +1,78 @@
/**
* @file lv_gpu_nxp_pxp_osa.h
*
*/
/**
* MIT License
*
* Copyright 2020, 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_GPU_NXP_PXP_OSA_H
#define LV_GPU_NXP_PXP_OSA_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_PXP && LV_USE_GPU_NXP_PXP_AUTO_INIT
#include "lv_gpu_nxp_pxp.h"
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/**********************
* GLOBAL PROTOTYPES
**********************/
/**
* PXP device interrupt handler. Used to check PXP task completion status.
*/
void PXP_IRQHandler(void);
/**
* Helper function to get the PXP default configuration.
*/
lv_nxp_pxp_cfg_t * lv_gpu_nxp_pxp_get_cfg(void);
/**********************
* MACROS
**********************/
#endif /*LV_USE_GPU_NXP_PXP && LV_USE_GPU_NXP_PXP_AUTO_INIT*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_GPU_NXP_PXP_OSA_H*/


@@ -0,0 +1,9 @@
CSRCS += lv_draw_vglite_arc.c
CSRCS += lv_draw_vglite_blend.c
CSRCS += lv_draw_vglite_rect.c
CSRCS += lv_gpu_nxp_vglite.c
DEPPATH += --dep-path $(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/vglite
VPATH += :$(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/vglite
CFLAGS += "-I$(LVGL_DIR)/$(LVGL_DIR_NAME)/src/draw/nxp/vglite"


@@ -0,0 +1,699 @@
/**
* @file lv_draw_vglite_arc.c
*
*/
/**
* MIT License
*
* Copyright 2021, 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_draw_vglite_arc.h"
#if LV_USE_GPU_NXP_VG_LITE
#include "math.h"
/*********************
* DEFINES
*********************/
#define T_FRACTION 16384.0f
#define DICHOTO_ITER 5
static const uint16_t TperDegree[90] = {
0, 174, 348, 522, 697, 873, 1049, 1226, 1403, 1581,
1759, 1938, 2117, 2297, 2477, 2658, 2839, 3020, 3202, 3384,
3567, 3749, 3933, 4116, 4300, 4484, 4668, 4852, 5037, 5222,
5407, 5592, 5777, 5962, 6148, 6334, 6519, 6705, 6891, 7077,
7264, 7450, 7636, 7822, 8008, 8193, 8378, 8564, 8750, 8936,
9122, 9309, 9495, 9681, 9867, 10052, 10238, 10424, 10609, 10794,
10979, 11164, 11349, 11534, 11718, 11902, 12086, 12270, 12453, 12637,
12819, 13002, 13184, 13366, 13547, 13728, 13909, 14089, 14269, 14448,
14627, 14805, 14983, 15160, 15337, 15513, 15689, 15864, 16038, 16212
};
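/* TperDegree[d] is the cubic-Bezier parameter 't' (scaled by T_FRACTION = 16384) at which the
 * quarter-circle approximation reaches 'd' degrees, for d in [0; 89]. For example TperDegree[45]
 * is 8193, i.e. roughly T_FRACTION / 2: half of the quarter arc is reached near t = 0.5. */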
/**********************
* TYPEDEFS
**********************/
/* intermediate arc params */
typedef struct _vg_arc {
int32_t angle; /* angle <90deg */
int32_t quarter; /* 0-3 counter-clockwise */
int32_t rad; /* radius */
int32_t p0x; /* point P0 */
int32_t p0y;
int32_t p1x; /* point P1 */
int32_t p1y;
int32_t p2x; /* point P2 */
int32_t p2y;
int32_t p3x; /* point P3 */
int32_t p3y;
} vg_arc;
typedef struct _cubic_cont_pt {
float p0;
float p1;
float p2;
float p3;
} cubic_cont_pt;
/**********************
* STATIC PROTOTYPES
**********************/
static void rotate_point(int32_t angle, int32_t * x, int32_t * y);
static void add_arc_path(int32_t * arc_path, int * pidx, int32_t radius,
int32_t start_angle, int32_t end_angle, lv_point_t center, bool cw);
/**********************
* STATIC VARIABLES
**********************/
/**********************
* MACROS
**********************/
/**********************
* GLOBAL FUNCTIONS
**********************/
lv_res_t lv_gpu_nxp_vglite_draw_arc(lv_draw_ctx_t * draw_ctx, const lv_draw_arc_dsc_t * dsc, const lv_point_t * center,
int32_t radius, int32_t start_angle, int32_t end_angle)
{
vg_lite_buffer_t vgbuf;
vg_lite_error_t err = VG_LITE_SUCCESS;
lv_color32_t col32 = {.full = lv_color_to32(dsc->color)}; /*Convert color to RGBA8888*/
lv_coord_t dest_width = lv_area_get_width(draw_ctx->buf_area);
lv_coord_t dest_height = lv_area_get_height(draw_ctx->buf_area);
vg_lite_path_t path;
vg_lite_color_t vgcol; /* vglite takes ABGR */
vg_lite_matrix_t matrix;
lv_opa_t opa = dsc->opa;
bool donut = ((end_angle - start_angle) % 360 == 0) ? true : false;
lv_point_t clip_center = {center->x - draw_ctx->buf_area->x1, center->y - draw_ctx->buf_area->y1};
/* path: max size = 16 cubic bezier (7 words each) */
int32_t arc_path[16 * 7];
lv_memset_00(arc_path, sizeof(arc_path));
/*** Init destination buffer ***/
if(lv_vglite_init_buf(&vgbuf, (uint32_t)dest_width, (uint32_t)dest_height, (uint32_t)dest_width * sizeof(lv_color_t),
(const lv_color_t *)draw_ctx->buf, false) != LV_RES_OK)
VG_LITE_RETURN_INV("Init buffer failed.");
/*** Init path ***/
lv_coord_t width = dsc->width; /* inner arc radius = outer arc radius - width */
if(width > (lv_coord_t)radius)
width = radius;
int pidx = 0;
int32_t cp_x, cp_y; /* control point coords */
/* first control point of curve */
cp_x = radius;
cp_y = 0;
rotate_point(start_angle, &cp_x, &cp_y);
arc_path[pidx++] = VLC_OP_MOVE;
arc_path[pidx++] = clip_center.x + cp_x;
arc_path[pidx++] = clip_center.y + cp_y;
/* draw 1-5 outer quarters */
add_arc_path(arc_path, &pidx, radius, start_angle, end_angle, clip_center, true);
if(donut) {
/* close outer circle */
cp_x = radius;
cp_y = 0;
rotate_point(start_angle, &cp_x, &cp_y);
arc_path[pidx++] = VLC_OP_LINE;
arc_path[pidx++] = clip_center.x + cp_x;
arc_path[pidx++] = clip_center.y + cp_y;
/* start inner circle */
cp_x = radius - width;
cp_y = 0;
rotate_point(start_angle, &cp_x, &cp_y);
arc_path[pidx++] = VLC_OP_MOVE;
arc_path[pidx++] = clip_center.x + cp_x;
arc_path[pidx++] = clip_center.y + cp_y;
}
else if(dsc->rounded != 0U) { /* 1st rounded arc ending */
cp_x = radius - width / 2;
cp_y = 0;
rotate_point(end_angle, &cp_x, &cp_y);
lv_point_t round_center = {clip_center.x + cp_x, clip_center.y + cp_y};
add_arc_path(arc_path, &pidx, width / 2, end_angle, (end_angle + 180),
round_center, true);
}
else { /* 1st flat ending */
cp_x = radius - width;
cp_y = 0;
rotate_point(end_angle, &cp_x, &cp_y);
arc_path[pidx++] = VLC_OP_LINE;
arc_path[pidx++] = clip_center.x + cp_x;
arc_path[pidx++] = clip_center.y + cp_y;
}
/* draw 1-5 inner quarters */
add_arc_path(arc_path, &pidx, radius - width, start_angle, end_angle, clip_center, false);
/* last control point of curve */
if(donut) { /* close the loop */
cp_x = radius - width;
cp_y = 0;
rotate_point(start_angle, &cp_x, &cp_y);
arc_path[pidx++] = VLC_OP_LINE;
arc_path[pidx++] = clip_center.x + cp_x;
arc_path[pidx++] = clip_center.y + cp_y;
}
else if(dsc->rounded != 0U) { /* 2nd rounded arc ending */
cp_x = radius - width / 2;
cp_y = 0;
rotate_point(start_angle, &cp_x, &cp_y);
lv_point_t round_center = {clip_center.x + cp_x, clip_center.y + cp_y};
add_arc_path(arc_path, &pidx, width / 2, (start_angle + 180), (start_angle + 360),
round_center, true);
}
else { /* 2nd flat ending */
cp_x = radius;
cp_y = 0;
rotate_point(start_angle, &cp_x, &cp_y);
arc_path[pidx++] = VLC_OP_LINE;
arc_path[pidx++] = clip_center.x + cp_x;
arc_path[pidx++] = clip_center.y + cp_y;
}
arc_path[pidx++] = VLC_OP_END;
err = vg_lite_init_path(&path, VG_LITE_S32, VG_LITE_HIGH, (uint32_t)pidx * sizeof(int32_t), arc_path,
(vg_lite_float_t) draw_ctx->clip_area->x1, (vg_lite_float_t) draw_ctx->clip_area->y1,
((vg_lite_float_t) draw_ctx->clip_area->x2) + 1.0f, ((vg_lite_float_t) draw_ctx->clip_area->y2) + 1.0f);
VG_LITE_ERR_RETURN_INV(err, "Init path failed.");
/* set rotation angle */
vg_lite_identity(&matrix);
if(opa <= (lv_opa_t)LV_OPA_MAX) {
/* Only pre-multiply color if hardware pre-multiplication is not present */
if(!vg_lite_query_feature(gcFEATURE_BIT_VG_PE_PREMULTIPLY)) {
col32.ch.red = (uint8_t)(((uint16_t)col32.ch.red * opa) >> 8);
col32.ch.green = (uint8_t)(((uint16_t)col32.ch.green * opa) >> 8);
col32.ch.blue = (uint8_t)(((uint16_t)col32.ch.blue * opa) >> 8);
}
col32.ch.alpha = opa;
}
#if LV_COLOR_DEPTH==16
vgcol = col32.full;
#else /*LV_COLOR_DEPTH==32*/
vgcol = ((uint32_t)col32.ch.alpha << 24) | ((uint32_t)col32.ch.blue << 16) | ((uint32_t)col32.ch.green << 8) |
(uint32_t)col32.ch.red;
#endif
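/* Example of the 32-bit packing above: for col32 with A = 0xFF, R = 0x20, G = 0x40, B = 0xC0
 * the resulting vgcol is 0xFFC04020, i.e. the ABGR byte order expected by VG-Lite. */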
/*Clean & invalidate cache*/
lv_vglite_invalidate_cache();
/*** Draw arc ***/
err = vg_lite_draw(&vgbuf, &path, VG_LITE_FILL_NON_ZERO, &matrix, VG_LITE_BLEND_SRC_OVER, vgcol);
VG_LITE_ERR_RETURN_INV(err, "Draw arc failed.");
err = vg_lite_finish();
VG_LITE_ERR_RETURN_INV(err, "Finish failed.");
err = vg_lite_clear_path(&path);
VG_LITE_ERR_RETURN_INV(err, "Clear path failed.");
return LV_RES_OK;
}
/**********************
* STATIC FUNCTIONS
**********************/
static void copy_arc(vg_arc * dst, vg_arc * src)
{
dst->quarter = src->quarter;
dst->rad = src->rad;
dst->angle = src->angle;
dst->p0x = src->p0x;
dst->p1x = src->p1x;
dst->p2x = src->p2x;
dst->p3x = src->p3x;
dst->p0y = src->p0y;
dst->p1y = src->p1y;
dst->p2y = src->p2y;
dst->p3y = src->p3y;
}
/**
* Rotate the point according to the given rotation angle; the rotation center is (0,0).
*/
static void rotate_point(int32_t angle, int32_t * x, int32_t * y)
{
int32_t ori_x = *x;
int32_t ori_y = *y;
int16_t alpha = (int16_t)angle;
*x = ((lv_trigo_cos(alpha) * ori_x) / LV_TRIGO_SIN_MAX) - ((lv_trigo_sin(alpha) * ori_y) / LV_TRIGO_SIN_MAX);
*y = ((lv_trigo_sin(alpha) * ori_x) / LV_TRIGO_SIN_MAX) + ((lv_trigo_cos(alpha) * ori_y) / LV_TRIGO_SIN_MAX);
}
/**
* Set full arc control points depending on quarter.
* Control points match the best approximation of a circle.
* Arc Quarter position is:
* Q2 | Q3
* ---+---
* Q1 | Q0
*/
static void set_full_arc(vg_arc * fullarc)
{
/* the tangent length for the Bezier circle approximation */
float tang = ((float)fullarc->rad) * BEZIER_OPTIM_CIRCLE;
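/* Note: BEZIER_OPTIM_CIRCLE (provided by the VG-Lite helper header) is assumed to be the usual
 * cubic-Bezier circle constant of about 0.55 (4 * (sqrt(2) - 1) / 3), so for example rad = 100
 * gives a tangent length of roughly 55 px. */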
switch(fullarc->quarter) {
case 0:
/* first quarter */
fullarc->p0x = fullarc->rad;
fullarc->p0y = 0;
fullarc->p1x = fullarc->rad;
fullarc->p1y = (int32_t)tang;
fullarc->p2x = (int32_t)tang;
fullarc->p2y = fullarc->rad;
fullarc->p3x = 0;
fullarc->p3y = fullarc->rad;
break;
case 1:
/* second quarter */
fullarc->p0x = 0;
fullarc->p0y = fullarc->rad;
fullarc->p1x = 0 - (int32_t)tang;
fullarc->p1y = fullarc->rad;
fullarc->p2x = 0 - fullarc->rad;
fullarc->p2y = (int32_t)tang;
fullarc->p3x = 0 - fullarc->rad;
fullarc->p3y = 0;
break;
case 2:
/* third quarter */
fullarc->p0x = 0 - fullarc->rad;
fullarc->p0y = 0;
fullarc->p1x = 0 - fullarc->rad;
fullarc->p1y = 0 - (int32_t)tang;
fullarc->p2x = 0 - (int32_t)tang;
fullarc->p2y = 0 - fullarc->rad;
fullarc->p3x = 0;
fullarc->p3y = 0 - fullarc->rad;
break;
case 3:
/* fourth quarter */
fullarc->p0x = 0;
fullarc->p0y = 0 - fullarc->rad;
fullarc->p1x = (int32_t)tang;
fullarc->p1y = 0 - fullarc->rad;
fullarc->p2x = fullarc->rad;
fullarc->p2y = 0 - (int32_t)tang;
fullarc->p3x = fullarc->rad;
fullarc->p3y = 0;
break;
default:
LV_LOG_ERROR("Invalid arc quarter value.");
break;
}
}
/**
* Linear interpolation between two coordinates 'a' and 'b'.
* The 't' parameter is the proportion ratio expressed in the range [0; T_FRACTION].
*/
static inline float lerp(float coord_a, float coord_b, uint16_t t)
{
float tf = (float)t;
return ((T_FRACTION - tf) * coord_a + tf * coord_b) / T_FRACTION;
}
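/* Worked example: lerp(10.0f, 20.0f, 8192) = ((16384 - 8192) * 10 + 8192 * 20) / 16384 = 15.0f,
 * i.e. t = T_FRACTION / 2 returns the midpoint. */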
/**
* Computes a point of the Bezier curve for the given 't' parameter
*/
static inline float comp_bezier_point(float t, cubic_cont_pt cp)
{
float t_sq = t * t;
float inv_t_sq = (1.0f - t) * (1.0f - t);
float apt = (1.0f - t) * inv_t_sq * cp.p0 + 3.0f * inv_t_sq * t * cp.p1 + 3.0f * (1.0f - t) * t_sq * cp.p2 + t * t_sq *
cp.p3;
return apt;
}
/**
* Find the parameter 't' of the curve at point 'pt'.
* Proceeds by dichotomy on one dimension only, so it works only if the curve is monotonic.
* The Bezier curve is defined by the control points [p0 p1 p2 p3].
* 'dec' tells whether the curve is decreasing (true) or increasing (false).
*/
static uint16_t get_bez_t_from_pos(float pt, cubic_cont_pt cp, bool dec)
{
/* initialize dichotomy with boundary 't' values */
float t_low = 0.0f;
float t_mid = 0.5f;
float t_hig = 1.0f;
float a_pt;
/* dichotomy loop */
for(int i = 0; i < DICHOTO_ITER; i++) {
a_pt = comp_bezier_point(t_mid, cp);
/* check mid-point position on bezier curve versus targeted point */
if((a_pt > pt) != dec) {
t_hig = t_mid;
}
else {
t_low = t_mid;
}
/* define new 't' param for mid-point */
t_mid = (t_low + t_hig) / 2.0f;
}
/* return parameter 't' in integer range [0 ; T_FRACTION] */
return (uint16_t)floorf(t_mid * T_FRACTION + 0.5f);
}
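/* With DICHOTO_ITER = 5, 't_mid' is narrowed to within 1/32 of the [0; 1] parameter range,
 * i.e. about 512 units on the [0; T_FRACTION] scale returned above. */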
/**
* Gives the relative coordinates of the control points
* for the sub-arc starting at 'angle' with the given angle span.
*/
static void get_subarc_control_points(vg_arc * arc, int32_t span)
{
vg_arc fullarc;
fullarc.angle = arc->angle;
fullarc.quarter = arc->quarter;
fullarc.rad = arc->rad;
set_full_arc(&fullarc);
/* special case of full arc */
if(arc->angle == 90) {
copy_arc(arc, &fullarc);
return;
}
/* compute 1st arc using the geometric construction of curve */
uint16_t t2 = TperDegree[arc->angle + span];
/* lerp for A */
float a2x = lerp((float)fullarc.p0x, (float)fullarc.p1x, t2);
float a2y = lerp((float)fullarc.p0y, (float)fullarc.p1y, t2);
/* lerp for B */
float b2x = lerp((float)fullarc.p1x, (float)fullarc.p2x, t2);
float b2y = lerp((float)fullarc.p1y, (float)fullarc.p2y, t2);
/* lerp for C */
float c2x = lerp((float)fullarc.p2x, (float)fullarc.p3x, t2);
float c2y = lerp((float)fullarc.p2y, (float)fullarc.p3y, t2);
/* lerp for D */
float d2x = lerp(a2x, b2x, t2);
float d2y = lerp(a2y, b2y, t2);
/* lerp for E */
float e2x = lerp(b2x, c2x, t2);
float e2y = lerp(b2y, c2y, t2);
float pt2x = lerp(d2x, e2x, t2);
float pt2y = lerp(d2y, e2y, t2);
/* compute sub-arc using the geometric construction of curve */
uint16_t t1 = TperDegree[arc->angle];
/* lerp for A */
float a1x = lerp((float)fullarc.p0x, (float)fullarc.p1x, t1);
float a1y = lerp((float)fullarc.p0y, (float)fullarc.p1y, t1);
/* lerp for B */
float b1x = lerp((float)fullarc.p1x, (float)fullarc.p2x, t1);
float b1y = lerp((float)fullarc.p1y, (float)fullarc.p2y, t1);
/* lerp for C */
float c1x = lerp((float)fullarc.p2x, (float)fullarc.p3x, t1);
float c1y = lerp((float)fullarc.p2y, (float)fullarc.p3y, t1);
/* lerp for D */
float d1x = lerp(a1x, b1x, t1);
float d1y = lerp(a1y, b1y, t1);
/* lerp for E */
float e1x = lerp(b1x, c1x, t1);
float e1y = lerp(b1y, c1y, t1);
float pt1x = lerp(d1x, e1x, t1);
float pt1y = lerp(d1y, e1y, t1);
/* find the 't3' parameter for point P(t1) on the sub-arc [P0 A2 D2 P(t2)] using dichotomy
* use position of x axis only */
uint16_t t3;
t3 = get_bez_t_from_pos(pt1x,
(cubic_cont_pt) {
.p0 = ((float)fullarc.p0x), .p1 = a2x, .p2 = d2x, .p3 = pt2x
},
(bool)(pt2x < (float)fullarc.p0x));
/* lerp for B */
float b3x = lerp(a2x, d2x, t3);
float b3y = lerp(a2y, d2y, t3);
/* lerp for C */
float c3x = lerp(d2x, pt2x, t3);
float c3y = lerp(d2y, pt2y, t3);
/* lerp for E */
float e3x = lerp(b3x, c3x, t3);
float e3y = lerp(b3y, c3y, t3);
arc->p0x = (int32_t)floorf(0.5f + pt1x);
arc->p0y = (int32_t)floorf(0.5f + pt1y);
arc->p1x = (int32_t)floorf(0.5f + e3x);
arc->p1y = (int32_t)floorf(0.5f + e3y);
arc->p2x = (int32_t)floorf(0.5f + c3x);
arc->p2y = (int32_t)floorf(0.5f + c3y);
arc->p3x = (int32_t)floorf(0.5f + pt2x);
arc->p3y = (int32_t)floorf(0.5f + pt2y);
}
/**
* Gives the relative coordinates of the control points.
*/
static void get_arc_control_points(vg_arc * arc, bool start)
{
vg_arc fullarc;
fullarc.angle = arc->angle;
fullarc.quarter = arc->quarter;
fullarc.rad = arc->rad;
set_full_arc(&fullarc);
/* special case of full arc */
if(arc->angle == 90) {
copy_arc(arc, &fullarc);
return;
}
/* compute sub-arc using the geometric construction of curve */
uint16_t t = TperDegree[arc->angle];
/* lerp for A */
float ax = lerp((float)fullarc.p0x, (float)fullarc.p1x, t);
float ay = lerp((float)fullarc.p0y, (float)fullarc.p1y, t);
/* lerp for B */
float bx = lerp((float)fullarc.p1x, (float)fullarc.p2x, t);
float by = lerp((float)fullarc.p1y, (float)fullarc.p2y, t);
/* lerp for C */
float cx = lerp((float)fullarc.p2x, (float)fullarc.p3x, t);
float cy = lerp((float)fullarc.p2y, (float)fullarc.p3y, t);
/* lerp for D */
float dx = lerp(ax, bx, t);
float dy = lerp(ay, by, t);
/* lerp for E */
float ex = lerp(bx, cx, t);
float ey = lerp(by, cy, t);
/* the sub-arc's control points come from the tangents of De Casteljau's algorithm */
if(start) {
arc->p0x = (int32_t)floorf(0.5f + lerp(dx, ex, t));
arc->p0y = (int32_t)floorf(0.5f + lerp(dy, ey, t));
arc->p1x = (int32_t)floorf(0.5f + ex);
arc->p1y = (int32_t)floorf(0.5f + ey);
arc->p2x = (int32_t)floorf(0.5f + cx);
arc->p2y = (int32_t)floorf(0.5f + cy);
arc->p3x = fullarc.p3x;
arc->p3y = fullarc.p3y;
}
else {
arc->p0x = fullarc.p0x;
arc->p0y = fullarc.p0y;
arc->p1x = (int32_t)floorf(0.5f + ax);
arc->p1y = (int32_t)floorf(0.5f + ay);
arc->p2x = (int32_t)floorf(0.5f + dx);
arc->p2y = (int32_t)floorf(0.5f + dy);
arc->p3x = (int32_t)floorf(0.5f + lerp(dx, ex, t));
arc->p3y = (int32_t)floorf(0.5f + lerp(dy, ey, t));
}
}
/**
* Add the arc control points into the path data for vglite,
* taking into account the real center of the arc (translation).
* arc_path: (in/out) the path data array for vglite
* pidx: (in/out) index of last element added in arc_path
* q_arc: (in) the arc data containing control points
* center: (in) the center of the circle in draw coordinates
* cw: (in) true if arc is clockwise
*/
static void add_split_arc_path(int32_t * arc_path, int * pidx, vg_arc * q_arc, lv_point_t center, bool cw)
{
/* assumes first control point already in array arc_path[] */
int idx = *pidx;
if(cw) {
#if BEZIER_DBG_CONTROL_POINTS
arc_path[idx++] = VLC_OP_LINE;
arc_path[idx++] = q_arc->p1x + center.x;
arc_path[idx++] = q_arc->p1y + center.y;
arc_path[idx++] = VLC_OP_LINE;
arc_path[idx++] = q_arc->p2x + center.x;
arc_path[idx++] = q_arc->p2y + center.y;
arc_path[idx++] = VLC_OP_LINE;
arc_path[idx++] = q_arc->p3x + center.x;
arc_path[idx++] = q_arc->p3y + center.y;
#else
arc_path[idx++] = VLC_OP_CUBIC;
arc_path[idx++] = q_arc->p1x + center.x;
arc_path[idx++] = q_arc->p1y + center.y;
arc_path[idx++] = q_arc->p2x + center.x;
arc_path[idx++] = q_arc->p2y + center.y;
arc_path[idx++] = q_arc->p3x + center.x;
arc_path[idx++] = q_arc->p3y + center.y;
#endif
}
else { /* reverse points order when counter-clockwise */
#if BEZIER_DBG_CONTROL_POINTS
arc_path[idx++] = VLC_OP_LINE;
arc_path[idx++] = q_arc->p2x + center.x;
arc_path[idx++] = q_arc->p2y + center.y;
arc_path[idx++] = VLC_OP_LINE;
arc_path[idx++] = q_arc->p1x + center.x;
arc_path[idx++] = q_arc->p1y + center.y;
arc_path[idx++] = VLC_OP_LINE;
arc_path[idx++] = q_arc->p0x + center.x;
arc_path[idx++] = q_arc->p0y + center.y;
#else
arc_path[idx++] = VLC_OP_CUBIC;
arc_path[idx++] = q_arc->p2x + center.x;
arc_path[idx++] = q_arc->p2y + center.y;
arc_path[idx++] = q_arc->p1x + center.x;
arc_path[idx++] = q_arc->p1y + center.y;
arc_path[idx++] = q_arc->p0x + center.x;
arc_path[idx++] = q_arc->p0y + center.y;
#endif
}
/* update index in path array */
*pidx = idx;
}
static void add_arc_path(int32_t * arc_path, int * pidx, int32_t radius,
int32_t start_angle, int32_t end_angle, lv_point_t center, bool cw)
{
/* set number of arcs to draw */
vg_arc q_arc;
int32_t start_arc_angle = start_angle % 90;
int32_t end_arc_angle = end_angle % 90;
int32_t inv_start_arc_angle = (start_arc_angle > 0) ? (90 - start_arc_angle) : 0;
int32_t nbarc = (end_angle - start_angle - inv_start_arc_angle - end_arc_angle) / 90;
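/* Worked example: start_angle = 30, end_angle = 210 gives start_arc_angle = 30,
 * inv_start_arc_angle = 60, end_arc_angle = 30 and nbarc = (180 - 60 - 30) / 90 = 1,
 * i.e. one partial start arc (30..90), one full quarter (90..180) and one partial
 * end arc (180..210). */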
q_arc.rad = radius;
/* handle special case of start & end point in the same quarter */
if(((start_angle / 90) == (end_angle / 90)) && (nbarc <= 0)) {
q_arc.quarter = (start_angle / 90) % 4;
q_arc.angle = start_arc_angle;
get_subarc_control_points(&q_arc, end_arc_angle - start_arc_angle);
add_split_arc_path(arc_path, pidx, &q_arc, center, cw);
return;
}
if(cw) {
/* partial starting arc */
if(start_arc_angle > 0) {
q_arc.quarter = (start_angle / 90) % 4;
q_arc.angle = start_arc_angle;
/* get cubic points relative to center */
get_arc_control_points(&q_arc, true);
/* put cubic points in arc_path */
add_split_arc_path(arc_path, pidx, &q_arc, center, cw);
}
/* full arcs */
for(int32_t q = 0; q < nbarc ; q++) {
q_arc.quarter = (q + ((start_angle + 89) / 90)) % 4;
q_arc.angle = 90;
/* get cubic points relative to center */
get_arc_control_points(&q_arc, true); /* 2nd parameter 'start' ignored */
/* put cubic points in arc_path */
add_split_arc_path(arc_path, pidx, &q_arc, center, cw);
}
/* partial ending arc */
if(end_arc_angle > 0) {
q_arc.quarter = (end_angle / 90) % 4;
q_arc.angle = end_arc_angle;
/* get cubic points relative to center */
get_arc_control_points(&q_arc, false);
/* put cubic points in arc_path */
add_split_arc_path(arc_path, pidx, &q_arc, center, cw);
}
}
else { /* counter clockwise */
/* partial ending arc */
if(end_arc_angle > 0) {
q_arc.quarter = (end_angle / 90) % 4;
q_arc.angle = end_arc_angle;
/* get cubic points relative to center */
get_arc_control_points(&q_arc, false);
/* put cubic points in arc_path */
add_split_arc_path(arc_path, pidx, &q_arc, center, cw);
}
/* full arcs */
for(int32_t q = nbarc - 1; q >= 0; q--) {
q_arc.quarter = (q + ((start_angle + 89) / 90)) % 4;
q_arc.angle = 90;
/* get cubic points relative to center */
get_arc_control_points(&q_arc, true); /* 2nd parameter 'start' ignored */
/* put cubic points in arc_path */
add_split_arc_path(arc_path, pidx, &q_arc, center, cw);
}
/* partial starting arc */
if(start_arc_angle > 0) {
q_arc.quarter = (start_angle / 90) % 4;
q_arc.angle = start_arc_angle;
/* get cubic points relative to center */
get_arc_control_points(&q_arc, true);
/* put cubic points in arc_path */
add_split_arc_path(arc_path, pidx, &q_arc, center, cw);
}
}
}
#endif /*LV_USE_GPU_NXP_VG_LITE*/


@@ -0,0 +1,79 @@
/**
* @file lv_draw_vglite_arc.h
*
*/
/**
* MIT License
*
* Copyright 2021, 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_DRAW_VGLITE_ARC_H
#define LV_DRAW_VGLITE_ARC_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_VG_LITE
#include "lv_gpu_nxp_vglite.h"
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/**********************
* GLOBAL PROTOTYPES
**********************/
/***
* Draw arc shape with effects
* @param draw_ctx drawing context
* @param dsc the arc description structure (width, rounded ending, opacity)
* @param center the coordinates of the arc center
* @param radius the radius of external arc
* @param start_angle the starting angle in degrees
* @param end_angle the ending angle in degrees
*/
lv_res_t lv_gpu_nxp_vglite_draw_arc(lv_draw_ctx_t * draw_ctx, const lv_draw_arc_dsc_t * dsc, const lv_point_t * center,
int32_t radius, int32_t start_angle, int32_t end_angle);
/**********************
* MACROS
**********************/
#endif /*LV_USE_GPU_NXP_VG_LITE*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_DRAW_VGLITE_ARC_H*/


@@ -0,0 +1,618 @@
/**
* @file lv_draw_vglite_blend.c
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_draw_vglite_blend.h"
#if LV_USE_GPU_NXP_VG_LITE
/*********************
* DEFINES
*********************/
/* Enable the BLIT quality degradation workaround for RT595; recommended when the screen dimension exceeds 352 pixels */
#define RT595_BLIT_WRKRND_ENABLED 1
/* Internal compound symbol */
#if (defined(CPU_MIMXRT595SFFOB) || defined(CPU_MIMXRT595SFFOB_cm33) || \
defined(CPU_MIMXRT595SFFOC) || defined(CPU_MIMXRT595SFFOC_cm33)) && \
RT595_BLIT_WRKRND_ENABLED
#define VG_LITE_BLIT_SPLIT_ENABLED 1
#else
#define VG_LITE_BLIT_SPLIT_ENABLED 0
#endif
/**
* BLIT split threshold - BLITs with width or height higher than this value will be done
* in multiple steps. Value must be 16-aligned. Don't change.
*/
#define LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR 352
/**********************
* TYPEDEFS
**********************/
/**********************
* STATIC PROTOTYPES
**********************/
/**
* BLock Image Transfer - single direct BLIT.
*
* @param[in] blit Description of the transfer
* @retval LV_RES_OK Transfer complete
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
static lv_res_t _lv_gpu_nxp_vglite_blit_single(lv_gpu_nxp_vglite_blit_info_t * blit);
#if VG_LITE_BLIT_SPLIT_ENABLED
/**
* Move the buffer pointer as close as possible to the area start while respecting the alignment requirements. X-axis only.
*
* @param[in,out] area Area to be updated
* @param[in,out] buf Pointer to be updated
*/
static void _align_x(lv_area_t * area, lv_color_t ** buf);
/**
* Move buffer pointer to the area start and update variables, Y-axis only.
*
* @param[in,out] area Area to be updated
* @param[in,out] buf Pointer to be updated
* @param[in] stridePx Buffer stride in pixels
*/
static void _align_y(lv_area_t * area, lv_color_t ** buf, uint32_t stridePx);
/**
* Software BLIT as a fall-back scenario.
*
* @param[in] blit BLIT configuration
*/
static void _sw_blit(lv_gpu_nxp_vglite_blit_info_t * blit);
/**
* Verify BLIT structure - widths, stride, pointer alignment
*
* @param[in] blit BLIT configuration
* @retval LV_RES_OK
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
static lv_res_t _lv_gpu_nxp_vglite_check_blit(lv_gpu_nxp_vglite_blit_info_t * blit);
/**
* BLock Image Transfer - split BLIT.
*
* @param[in] blit BLIT configuration
* @retval LV_RES_OK Transfer complete
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
static lv_res_t _lv_gpu_nxp_vglite_blit_split(lv_gpu_nxp_vglite_blit_info_t * blit);
#endif
/**********************
* STATIC VARIABLES
**********************/
/**********************
* MACROS
**********************/
/**********************
* GLOBAL FUNCTIONS
**********************/
lv_res_t lv_gpu_nxp_vglite_fill(lv_color_t * dest_buf, lv_coord_t dest_width, lv_coord_t dest_height,
const lv_area_t * fill_area, lv_color_t color, lv_opa_t opa)
{
uint32_t area_size = lv_area_get_size(fill_area);
lv_coord_t area_w = lv_area_get_width(fill_area);
lv_coord_t area_h = lv_area_get_height(fill_area);
if(opa >= (lv_opa_t)LV_OPA_MAX) {
if(area_size < LV_GPU_NXP_VG_LITE_FILL_SIZE_LIMIT)
VG_LITE_RETURN_INV("Area size %d smaller than limit %d.", area_size, LV_GPU_NXP_VG_LITE_FILL_SIZE_LIMIT);
}
else {
if(area_size < LV_GPU_NXP_VG_LITE_FILL_OPA_SIZE_LIMIT)
VG_LITE_RETURN_INV("Area size %d smaller than limit %d.", area_size, LV_GPU_NXP_VG_LITE_FILL_OPA_SIZE_LIMIT);
}
vg_lite_buffer_t vgbuf;
vg_lite_rectangle_t rect;
vg_lite_error_t err = VG_LITE_SUCCESS;
lv_color32_t col32 = {.full = lv_color_to32(color)}; /*Convert color to RGBA8888*/
vg_lite_color_t vgcol; /* vglite takes ABGR */
if(lv_vglite_init_buf(&vgbuf, (uint32_t)dest_width, (uint32_t)dest_height, (uint32_t)dest_width * sizeof(lv_color_t),
(const lv_color_t *)dest_buf, false) != LV_RES_OK)
VG_LITE_RETURN_INV("Init buffer failed.");
if(opa >= (lv_opa_t)LV_OPA_MAX) { /*Opaque fill*/
rect.x = fill_area->x1;
rect.y = fill_area->y1;
rect.width = area_w;
rect.height = area_h;
/*Clean & invalidate cache*/
lv_vglite_invalidate_cache();
#if LV_COLOR_DEPTH==16
vgcol = col32.full;
#else /*LV_COLOR_DEPTH==32*/
vgcol = ((uint32_t)col32.ch.alpha << 24) | ((uint32_t)col32.ch.blue << 16) | ((uint32_t)col32.ch.green << 8) |
(uint32_t)col32.ch.red;
#endif
err = vg_lite_clear(&vgbuf, &rect, vgcol);
VG_LITE_ERR_RETURN_INV(err, "Clear failed.");
err = vg_lite_finish();
VG_LITE_ERR_RETURN_INV(err, "Finish failed.");
}
else { /*fill with transparency*/
vg_lite_path_t path;
int32_t path_data[] = { /*VG rectangular path*/
VLC_OP_MOVE, fill_area->x1, fill_area->y1,
VLC_OP_LINE, fill_area->x2 + 1, fill_area->y1,
VLC_OP_LINE, fill_area->x2 + 1, fill_area->y2 + 1,
VLC_OP_LINE, fill_area->x1, fill_area->y2 + 1,
VLC_OP_LINE, fill_area->x1, fill_area->y1,
VLC_OP_END
};
err = vg_lite_init_path(&path, VG_LITE_S32, VG_LITE_LOW, sizeof(path_data), path_data,
(vg_lite_float_t) fill_area->x1, (vg_lite_float_t) fill_area->y1,
((vg_lite_float_t) fill_area->x2) + 1.0f, ((vg_lite_float_t) fill_area->y2) + 1.0f);
VG_LITE_ERR_RETURN_INV(err, "Init path failed.");
/* Only pre-multiply color if hardware pre-multiplication is not present */
if(!vg_lite_query_feature(gcFEATURE_BIT_VG_PE_PREMULTIPLY)) {
col32.ch.red = (uint8_t)(((uint16_t)col32.ch.red * opa) >> 8);
col32.ch.green = (uint8_t)(((uint16_t)col32.ch.green * opa) >> 8);
col32.ch.blue = (uint8_t)(((uint16_t)col32.ch.blue * opa) >> 8);
}
col32.ch.alpha = opa;
#if LV_COLOR_DEPTH==16
vgcol = col32.full;
#else /*LV_COLOR_DEPTH==32*/
vgcol = ((uint32_t)col32.ch.alpha << 24) | ((uint32_t)col32.ch.blue << 16) | ((uint32_t)col32.ch.green << 8) |
(uint32_t)col32.ch.red;
#endif
/*Clean & invalidate cache*/
lv_vglite_invalidate_cache();
vg_lite_matrix_t matrix;
vg_lite_identity(&matrix);
/*Draw rectangle*/
err = vg_lite_draw(&vgbuf, &path, VG_LITE_FILL_EVEN_ODD, &matrix, VG_LITE_BLEND_SRC_OVER, vgcol);
VG_LITE_ERR_RETURN_INV(err, "Draw rectangle failed.");
err = vg_lite_finish();
VG_LITE_ERR_RETURN_INV(err, "Finish failed.");
err = vg_lite_clear_path(&path);
VG_LITE_ERR_RETURN_INV(err, "Clear path failed.");
}
return LV_RES_OK;
}
lv_res_t lv_gpu_nxp_vglite_blit(lv_gpu_nxp_vglite_blit_info_t * blit)
{
uint32_t dest_size = lv_area_get_size(&blit->dst_area);
if(blit->opa >= (lv_opa_t)LV_OPA_MAX) {
if(dest_size < LV_GPU_NXP_VG_LITE_BLIT_SIZE_LIMIT)
VG_LITE_RETURN_INV("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_VG_LITE_BLIT_SIZE_LIMIT);
}
else {
if(dest_size < LV_GPU_NXP_VG_LITE_BLIT_OPA_SIZE_LIMIT)
VG_LITE_RETURN_INV("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_VG_LITE_BLIT_OPA_SIZE_LIMIT);
}
#if VG_LITE_BLIT_SPLIT_ENABLED
return _lv_gpu_nxp_vglite_blit_split(blit);
#endif /*VG_LITE_BLIT_SPLIT_ENABLED*/
/* Just pass down */
return _lv_gpu_nxp_vglite_blit_single(blit);
}
lv_res_t lv_gpu_nxp_vglite_blit_transform(lv_gpu_nxp_vglite_blit_info_t * blit)
{
uint32_t dest_size = lv_area_get_size(&blit->dst_area);
if(blit->opa >= (lv_opa_t)LV_OPA_MAX) {
if(dest_size < LV_GPU_NXP_VG_LITE_BLIT_SIZE_LIMIT)
VG_LITE_RETURN_INV("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_VG_LITE_BLIT_SIZE_LIMIT);
}
else {
if(dest_size < LV_GPU_NXP_VG_LITE_BLIT_OPA_SIZE_LIMIT)
VG_LITE_RETURN_INV("Area size %d smaller than limit %d.", dest_size, LV_GPU_NXP_VG_LITE_BLIT_OPA_SIZE_LIMIT);
}
return _lv_gpu_nxp_vglite_blit_single(blit);
}
/**********************
* STATIC FUNCTIONS
**********************/
#if VG_LITE_BLIT_SPLIT_ENABLED
static lv_res_t _lv_gpu_nxp_vglite_blit_split(lv_gpu_nxp_vglite_blit_info_t * blit)
{
lv_res_t rv = LV_RES_INV;
if(_lv_gpu_nxp_vglite_check_blit(blit) != LV_RES_OK) {
PRINT_BLT("Blit check failed\n");
return LV_RES_INV;
}
PRINT_BLT("BLIT from: "
"Area: %03d,%03d - %03d,%03d "
"Addr: %d\n\n",
blit->src_area.x1, blit->src_area.y1,
blit->src_area.x2, blit->src_area.y2,
(uintptr_t) blit->src);
PRINT_BLT("BLIT to: "
"Area: %03d,%03d - %03d,%03d "
"Addr: %d\n\n",
blit->dst_area.x1, blit->dst_area.y1,
blit->dst_area.x2, blit->dst_area.y2,
(uintptr_t) blit->src);
/* Stage 1: Move starting pointers as close as possible to [x1, y1], so coordinates are as small as possible. */
_align_x(&blit->src_area, (lv_color_t **)&blit->src);
_align_y(&blit->src_area, (lv_color_t **)&blit->src, blit->src_stride / sizeof(lv_color_t));
_align_x(&blit->dst_area, (lv_color_t **)&blit->dst);
_align_y(&blit->dst_area, (lv_color_t **)&blit->dst, blit->dst_stride / sizeof(lv_color_t));
/* Stage 2: If we're in limit, do a single BLIT */
if((blit->src_area.x2 < LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR) &&
(blit->src_area.y2 < LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR)) {
PRINT_BLT("Simple blit!\n");
return _lv_gpu_nxp_vglite_blit_single(blit);
};
/* Stage 3: Split the BLIT into multiple tiles */
PRINT_BLT("Split blit!\n");
PRINT_BLT("Blit "
"([%03d,%03d], [%03d,%03d]) -> "
"([%03d,%03d], [%03d,%03d]) | "
"([%03dx%03d] -> [%03dx%03d]) | "
"A:(%d -> %d)\n",
blit->src_area.x1, blit->src_area.y1, blit->src_area.x2, blit->src_area.y2,
blit->dst_area.x1, blit->dst_area.y1, blit->dst_area.x2, blit->dst_area.y2,
lv_area_get_width(&blit->src_area), lv_area_get_height(&blit->src_area),
lv_area_get_width(&blit->dst_area), lv_area_get_height(&blit->dst_area),
(uintptr_t) blit->src, (uintptr_t) blit->dst);
lv_coord_t totalWidth = lv_area_get_width(&blit->src_area);
lv_coord_t totalHeight = lv_area_get_height(&blit->src_area);
lv_gpu_nxp_vglite_blit_info_t tileBlit;
/* Number of tiles needed */
int totalTilesX = (blit->src_area.x1 + totalWidth + LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1) /
LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR;
int totalTilesY = (blit->src_area.y1 + totalHeight + LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1) /
LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR;
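/* Example: with the 352 px threshold, an aligned 480 x 600 px source area starting at [0,0]
 * gives totalTilesX = (480 + 351) / 352 = 2 and totalTilesY = (600 + 351) / 352 = 2,
 * i.e. 4 tiles in total. */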
/* The src and dst buffers may be shifted relative to each other. Because the buffer pointers have to be
 * aligned, the real src data [0,0] may actually start at e.g. [3,0] in the buffer, while the real dst data
 * [0,0] may start at e.g. [1,0]; the two alignments may differ. */
int shiftSrcX = (blit->src_area.x1 > blit->dst_area.x1) ? (blit->src_area.x1 - blit->dst_area.x1) : 0;
int shiftDstX = (blit->src_area.x1 < blit->dst_area.x1) ? (blit->dst_area.x1 - blit->src_area.x1) : 0;
PRINT_BLT("\n");
PRINT_BLT("Align shift: src: %d, dst: %d\n", shiftSrcX, shiftDstX);
tileBlit = *blit;
for(int tileY = 0; tileY < totalTilesY; tileY++) {
tileBlit.src_area.y1 = 0; /* no vertical alignment, always start from 0 */
tileBlit.src_area.y2 = totalHeight - tileY * LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1;
if(tileBlit.src_area.y2 >= LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR) {
tileBlit.src_area.y2 = LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1; /* Should never happen */
}
tileBlit.src = blit->src + tileY * LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR * blit->src_stride / sizeof(
lv_color_t); /* stride in px! */
tileBlit.dst_area.y1 = tileBlit.src_area.y1; /* y has no alignment, always in sync with src */
tileBlit.dst_area.y2 = tileBlit.src_area.y2;
tileBlit.dst = blit->dst + tileY * LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR * blit->dst_stride / sizeof(
lv_color_t); /* stride in px! */
for(int tileX = 0; tileX < totalTilesX; tileX++) {
if(tileX == 0) {
/* The 1st tile is special: there may be a gap between the buffer start pointer
* and the area.x1 value, as the pointer has to be aligned,
* so keep the tileBlit.src pointer init value from the Y-loop.
* Also, the 1st tile start is not shifted; the shift is applied from the 2nd tile on. */
tileBlit.src_area.x1 = blit->src_area.x1;
tileBlit.dst_area.x1 = blit->dst_area.x1;
}
else {
/* subsequent tiles always start from 0, but shifted */
tileBlit.src_area.x1 = 0 + shiftSrcX;
tileBlit.dst_area.x1 = 0 + shiftDstX;
/* and advance start pointer + 1 tile size */
tileBlit.src += LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR;
tileBlit.dst += LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR;
}
/* Clip tile end coordinates */
tileBlit.src_area.x2 = totalWidth + blit->src_area.x1 - tileX * LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1;
if(tileBlit.src_area.x2 >= LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR) {
tileBlit.src_area.x2 = LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1;
}
tileBlit.dst_area.x2 = totalWidth + blit->dst_area.x1 - tileX * LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1;
if(tileBlit.dst_area.x2 >= LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR) {
tileBlit.dst_area.x2 = LV_GPU_NXP_VG_LITE_BLIT_SPLIT_THR - 1;
}
if(tileX < (totalTilesX - 1)) {
/* And adjust end coords if shifted, but not for last tile! */
tileBlit.src_area.x2 += shiftSrcX;
tileBlit.dst_area.x2 += shiftDstX;
}
rv = _lv_gpu_nxp_vglite_blit_single(&tileBlit);
#if BLIT_DBG_AREAS
lv_vglite_dbg_draw_rectangle((lv_color_t *) tileBlit.dst, tileBlit.dst_width, tileBlit.dst_height, &tileBlit.dst_area,
LV_COLOR_RED);
lv_vglite_dbg_draw_rectangle((lv_color_t *) tileBlit.src, tileBlit.src_width, tileBlit.src_height, &tileBlit.src_area,
LV_COLOR_GREEN);
#endif
PRINT_BLT("Tile [%d, %d]: "
"([%d,%d], [%d,%d]) -> "
"([%d,%d], [%d,%d]) | "
"([%dx%d] -> [%dx%d]) | "
"A:(0x%8X -> 0x%8X) %s\n",
tileX, tileY,
tileBlit.src_area.x1, tileBlit.src_area.y1, tileBlit.src_area.x2, tileBlit.src_area.y2,
tileBlit.dst_area.x1, tileBlit.dst_area.y1, tileBlit.dst_area.x2, tileBlit.dst_area.y2,
lv_area_get_width(&tileBlit.src_area), lv_area_get_height(&tileBlit.src_area),
lv_area_get_width(&tileBlit.dst_area), lv_area_get_height(&tileBlit.dst_area),
(uintptr_t) tileBlit.src, (uintptr_t) tileBlit.dst,
rv == LV_RES_OK ? "OK!" : "!!! FAILED !!!");
if(rv != LV_RES_OK) { /* if anything goes wrong... */
#if LV_GPU_NXP_VG_LITE_LOG_ERRORS
LV_LOG_ERROR("Split blit failed. Trying SW blit instead.");
#endif
_sw_blit(&tileBlit);
rv = LV_RES_OK; /* Don't report error, as SW BLIT was performed */
}
}
PRINT_BLT(" \n");
}
return rv; /* should never fail */
}
#endif /* VG_LITE_BLIT_SPLIT_ENABLED */
static lv_res_t _lv_gpu_nxp_vglite_blit_single(lv_gpu_nxp_vglite_blit_info_t * blit)
{
vg_lite_buffer_t src_vgbuf, dst_vgbuf;
vg_lite_error_t err = VG_LITE_SUCCESS;
uint32_t rect[4];
vg_lite_float_t scale = 1.0;
if(blit == NULL) {
/*Wrong parameter*/
return LV_RES_INV;
}
if(blit->opa < (lv_opa_t) LV_OPA_MIN) {
return LV_RES_OK; /*Nothing to BLIT*/
}
/*Wrap src/dst buffer into VG-Lite buffer*/
if(lv_vglite_init_buf(&src_vgbuf, (uint32_t)blit->src_width, (uint32_t)blit->src_height, (uint32_t)blit->src_stride,
blit->src, true) != LV_RES_OK)
VG_LITE_RETURN_INV("Init buffer failed.");
if(lv_vglite_init_buf(&dst_vgbuf, (uint32_t)blit->dst_width, (uint32_t)blit->dst_height, (uint32_t)blit->dst_stride,
blit->dst, false) != LV_RES_OK)
VG_LITE_RETURN_INV("Init buffer failed.");
rect[0] = (uint32_t)blit->src_area.x1; /* start x */
rect[1] = (uint32_t)blit->src_area.y1; /* start y */
rect[2] = (uint32_t)blit->src_area.x2 - (uint32_t)blit->src_area.x1 + 1U; /* width */
rect[3] = (uint32_t)blit->src_area.y2 - (uint32_t)blit->src_area.y1 + 1U; /* height */
vg_lite_matrix_t matrix;
vg_lite_identity(&matrix);
vg_lite_translate((vg_lite_float_t)blit->dst_area.x1, (vg_lite_float_t)blit->dst_area.y1, &matrix);
if((blit->angle != 0) || (blit->zoom != LV_IMG_ZOOM_NONE)) {
vg_lite_translate(blit->pivot.x, blit->pivot.y, &matrix);
vg_lite_rotate(blit->angle / 10.0f, &matrix); /* angle is 1/10 degree */
scale = 1.0f * blit->zoom / LV_IMG_ZOOM_NONE;
vg_lite_scale(scale, scale, &matrix);
vg_lite_translate(0.0f - blit->pivot.x, 0.0f - blit->pivot.y, &matrix);
}
/*Clean & invalidate cache*/
lv_vglite_invalidate_cache();
uint32_t color;
vg_lite_blend_t blend;
if(blit->opa >= (lv_opa_t)LV_OPA_MAX) {
color = 0xFFFFFFFFU;
blend = VG_LITE_BLEND_SRC_OVER;
src_vgbuf.transparency_mode = VG_LITE_IMAGE_TRANSPARENT;
}
else {
uint32_t opa = (uint32_t)blit->opa;
if(vg_lite_query_feature(gcFEATURE_BIT_VG_PE_PREMULTIPLY)) {
color = (opa << 24) | 0x00FFFFFFU;
}
else {
color = (opa << 24) | (opa << 16) | (opa << 8) | opa;
}
blend = VG_LITE_BLEND_SRC_OVER;
src_vgbuf.image_mode = VG_LITE_MULTIPLY_IMAGE_MODE;
src_vgbuf.transparency_mode = VG_LITE_IMAGE_TRANSPARENT;
}
err = vg_lite_blit_rect(&dst_vgbuf, &src_vgbuf, rect, &matrix, blend, color, VG_LITE_FILTER_POINT);
VG_LITE_ERR_RETURN_INV(err, "Blit rectangle failed.");
err = vg_lite_finish();
VG_LITE_ERR_RETURN_INV(err, "Finish failed.");
return LV_RES_OK;
}
#if VG_LITE_BLIT_SPLIT_ENABLED
static void _sw_blit(lv_gpu_nxp_vglite_blit_info_t * blit)
{
int x, y;
lv_coord_t w = lv_area_get_width(&blit->src_area);
lv_coord_t h = lv_area_get_height(&blit->src_area);
int32_t srcStridePx = blit->src_stride / (int32_t)sizeof(lv_color_t);
int32_t dstStridePx = blit->dst_stride / (int32_t)sizeof(lv_color_t);
lv_color_t * src = (lv_color_t *)blit->src + blit->src_area.y1 * srcStridePx + blit->src_area.x1;
lv_color_t * dst = (lv_color_t *)blit->dst + blit->dst_area.y1 * dstStridePx + blit->dst_area.x1;
if(blit->opa >= (lv_opa_t)LV_OPA_MAX) {
/* simple copy */
for(y = 0; y < h; y++) {
lv_memcpy(dst, src, (uint32_t)w * sizeof(lv_color_t));
src += srcStridePx;
dst += dstStridePx;
}
}
else if(blit->opa >= LV_OPA_MIN) {
/* alpha blending */
for(y = 0; y < h; y++) {
for(x = 0; x < w; x++) {
dst[x] = lv_color_mix(src[x], dst[x], blit->opa);
}
src += srcStridePx;
dst += dstStridePx;
}
}
}
static lv_res_t _lv_gpu_nxp_vglite_check_blit(lv_gpu_nxp_vglite_blit_info_t * blit)
{
/* Test for minimal width */
if(lv_area_get_width(&blit->src_area) < (lv_coord_t)LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX)
VG_LITE_RETURN_INV("Src area width (%d) is smaller than required (%d).", lv_area_get_width(&blit->src_area),
LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX);
/* Test for minimal width */
if(lv_area_get_width(&blit->dst_area) < (lv_coord_t)LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX)
VG_LITE_RETURN_INV("Dest area width (%d) is smaller than required (%d).", lv_area_get_width(&blit->dst_area),
LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX);
/* Test for pointer alignment */
if((((uintptr_t) blit->src) % LV_ATTRIBUTE_MEM_ALIGN_SIZE) != 0x0)
VG_LITE_RETURN_INV("Src buffer ptr (0x%X) not aligned to %d.", (size_t) blit->src, LV_ATTRIBUTE_MEM_ALIGN_SIZE);
/* No alignment requirement for destination pixel buffer when using mode VG_LITE_LINEAR */
/* Test for stride alignment */
if((blit->src_stride % (LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX * LV_COLOR_DEPTH / 8)) != 0x0)
VG_LITE_RETURN_INV("Src buffer stride (%d px) not aligned to %d px.", blit->src_stride,
LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX);
/* Test for stride alignment */
if((blit->dst_stride % (LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX * LV_COLOR_DEPTH / 8)) != 0x0)
VG_LITE_RETURN_INV("Dest buffer stride (%d px) not aligned to %d px.", blit->dst_stride,
LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX);
if((lv_area_get_width(&blit->src_area) != lv_area_get_width(&blit->dst_area)) ||
(lv_area_get_height(&blit->src_area) != lv_area_get_height(&blit->dst_area)))
VG_LITE_RETURN_INV("Src and dest buffer areas are not equal.");
return LV_RES_OK;
}
static void _align_x(lv_area_t * area, lv_color_t ** buf)
{
int alignedAreaStartPx = area->x1 - (area->x1 % (LV_ATTRIBUTE_MEM_ALIGN_SIZE * 8 / LV_COLOR_DEPTH));
VG_LITE_COND_STOP(alignedAreaStartPx < 0, "Negative X alignment.");
area->x1 -= alignedAreaStartPx;
area->x2 -= alignedAreaStartPx;
*buf += alignedAreaStartPx;
}
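/* Example: with 16-bit color and LV_ATTRIBUTE_MEM_ALIGN_SIZE = 32 the granularity is
 * 32 * 8 / 16 = 16 px; for x1 = 37 the computed alignedAreaStartPx is 32, so the area
 * becomes x1 = 5 (x2 shifted by the same 32 px) and the buffer pointer advances 32 pixels. */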
static void _align_y(lv_area_t * area, lv_color_t ** buf, uint32_t stridePx)
{
int LineToAlignMem;
int alignedAreaStartPy;
/* find how many lines of pixels will respect memory alignment requirement */
if(stridePx % (uint32_t)LV_ATTRIBUTE_MEM_ALIGN_SIZE == 0U) {
alignedAreaStartPy = area->y1;
}
else {
LineToAlignMem = LV_ATTRIBUTE_MEM_ALIGN_SIZE / (sizeof(lv_color_t) * LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX);
VG_LITE_COND_STOP(LV_ATTRIBUTE_MEM_ALIGN_SIZE % (sizeof(lv_color_t) * LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX),
"Complex case: need gcd function.");
alignedAreaStartPy = area->y1 - (area->y1 % LineToAlignMem);
VG_LITE_COND_STOP(alignedAreaStartPy < 0, "Negative Y alignment.");
}
area->y1 -= alignedAreaStartPy;
area->y2 -= alignedAreaStartPy;
*buf += (uint32_t)alignedAreaStartPy * stridePx;
}
#endif /*VG_LITE_BLIT_SPLIT_ENABLED*/
#endif /*LV_USE_GPU_NXP_VG_LITE*/


@@ -0,0 +1,149 @@
/**
* @file lv_draw_vglite_blend.h
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_DRAW_VGLITE_BLEND_H
#define LV_DRAW_VGLITE_BLEND_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_VG_LITE
#include "lv_gpu_nxp_vglite.h"
/*********************
* DEFINES
*********************/
#ifndef LV_GPU_NXP_VG_LITE_FILL_SIZE_LIMIT
/** Minimum area (in pixels) to be filled by VG-Lite with 100% opacity*/
#define LV_GPU_NXP_VG_LITE_FILL_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_VG_LITE_FILL_OPA_SIZE_LIMIT
/** Minimum area (in pixels) to be filled by VG-Lite with transparency*/
#define LV_GPU_NXP_VG_LITE_FILL_OPA_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_VG_LITE_BLIT_SIZE_LIMIT
/** Minimum area (in pixels) for image copy with 100% opacity to be handled by VG-Lite*/
#define LV_GPU_NXP_VG_LITE_BLIT_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_VG_LITE_BUFF_SYNC_BLIT_SIZE_LIMIT
/** Minimum invalidated area (in pixels) to be synchronized by VG-Lite during buffer sync */
#define LV_GPU_NXP_VG_LITE_BUFF_SYNC_BLIT_SIZE_LIMIT 5000
#endif
#ifndef LV_GPU_NXP_VG_LITE_BLIT_OPA_SIZE_LIMIT
/** Minimum area (in pixels) for image copy with transparency to be handled by VG-Lite*/
#define LV_GPU_NXP_VG_LITE_BLIT_OPA_SIZE_LIMIT 5000
#endif
/**********************
* TYPEDEFS
**********************/
/**
* BLock Image Transfer descriptor structure
*/
typedef struct {
const lv_color_t * src; /**< Source buffer pointer (must be aligned on 32 bytes)*/
lv_area_t src_area; /**< Area to be copied from source*/
lv_coord_t src_width; /**< Source buffer width*/
lv_coord_t src_height; /**< Source buffer height*/
int32_t src_stride; /**< Source buffer stride in bytes (must be aligned on 16 px)*/
const lv_color_t * dst; /**< Destination buffer pointer (must be aligned on 32 bytes)*/
lv_area_t dst_area; /**< Target area in destination buffer (must be the same as src_area)*/
lv_coord_t dst_width; /**< Destination buffer width*/
lv_coord_t dst_height; /**< Destination buffer height*/
int32_t dst_stride; /**< Destination buffer stride in bytes (must be aligned on 16 px)*/
lv_opa_t opa; /**< Opacity - alpha mix (0 = source not copied, 255 = 100% opaque)*/
uint32_t angle; /**< Rotation angle (1/10 of degree)*/
uint32_t zoom; /**< 256 = no zoom (1:1 scale ratio)*/
lv_point_t pivot; /**< The coordinates of rotation pivot in source image buffer*/
} lv_gpu_nxp_vglite_blit_info_t;
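/* Illustrative sketch, not part of the driver: a plain 1:1 copy of a 128 x 128 px area
 * (large enough to pass the size limits above) could be described as follows. The buffer
 * names are hypothetical; pointers and strides must respect the alignment noted above.
 *
 *    lv_gpu_nxp_vglite_blit_info_t blit;
 *    blit.src        = src_buf;
 *    blit.src_width  = 128;
 *    blit.src_height = 128;
 *    blit.src_stride = 128 * (int32_t)sizeof(lv_color_t);
 *    lv_area_set(&blit.src_area, 0, 0, 127, 127);
 *    blit.dst        = dst_buf;
 *    blit.dst_width  = 128;
 *    blit.dst_height = 128;
 *    blit.dst_stride = 128 * (int32_t)sizeof(lv_color_t);
 *    lv_area_set(&blit.dst_area, 0, 0, 127, 127);
 *    blit.opa   = LV_OPA_COVER;
 *    blit.angle = 0;
 *    blit.zoom  = LV_IMG_ZOOM_NONE;
 *    blit.pivot.x = 0;
 *    blit.pivot.y = 0;
 *    lv_gpu_nxp_vglite_blit(&blit);
 */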
/**********************
* GLOBAL PROTOTYPES
**********************/
/**
* Fill area, with optional opacity.
*
* @param[in/out] dest_buf Destination buffer pointer (must be aligned on 32 bytes)
* @param[in] dest_width Destination buffer width in pixels (must be aligned on 16 px)
* @param[in] dest_height Destination buffer height in pixels
* @param[in] fill_area Area to be filled
* @param[in] color Fill color
* @param[in] opa Opacity (255 = full, 128 = 50% background/50% color, 0 = no fill)
* @retval LV_RES_OK Fill completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_vglite_fill(lv_color_t * dest_buf, lv_coord_t dest_width, lv_coord_t dest_height,
const lv_area_t * fill_area, lv_color_t color, lv_opa_t opa);
/**
* BLock Image Transfer.
*
* @param[in] blit Description of the transfer
* @retval LV_RES_OK Transfer complete
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_vglite_blit(lv_gpu_nxp_vglite_blit_info_t * blit);
/**
* BLock Image Transfer with transformation.
*
* @param[in] blit Description of the transfer
* @retval LV_RES_OK Transfer complete
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_vglite_blit_transform(lv_gpu_nxp_vglite_blit_info_t * blit);
/**********************
* MACROS
**********************/
#endif /*LV_USE_GPU_NXP_VG_LITE*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_DRAW_VGLITE_BLEND_H*/


@@ -0,0 +1,244 @@
/**
* @file lv_draw_vglite_rect.c
*
*/
/**
* MIT License
*
* Copyright 2021, 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_draw_vglite_rect.h"
#if LV_USE_GPU_NXP_VG_LITE
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/**********************
* STATIC PROTOTYPES
**********************/
/**********************
* STATIC VARIABLES
**********************/
/**********************
* MACROS
**********************/
/**********************
* GLOBAL FUNCTIONS
**********************/
lv_res_t lv_gpu_nxp_vglite_draw_bg(lv_draw_ctx_t * draw_ctx, const lv_draw_rect_dsc_t * dsc, const lv_area_t * coords)
{
vg_lite_buffer_t vgbuf;
vg_lite_error_t err = VG_LITE_SUCCESS;
lv_coord_t dest_width = lv_area_get_width(draw_ctx->buf_area);
lv_coord_t dest_height = lv_area_get_height(draw_ctx->buf_area);
vg_lite_path_t path;
vg_lite_color_t vgcol; /* vglite takes ABGR */
vg_lite_matrix_t matrix;
lv_coord_t width = lv_area_get_width(coords);
lv_coord_t height = lv_area_get_height(coords);
vg_lite_linear_gradient_t gradient;
vg_lite_matrix_t * grad_matrix;
if(dsc->radius < 0)
return LV_RES_INV;
/* Make areas relative to draw buffer */
lv_area_t rel_coords;
lv_area_copy(&rel_coords, coords);
lv_area_move(&rel_coords, -draw_ctx->buf_area->x1, -draw_ctx->buf_area->y1);
lv_area_t rel_clip;
lv_area_copy(&rel_clip, draw_ctx->clip_area);
lv_area_move(&rel_clip, -draw_ctx->buf_area->x1, -draw_ctx->buf_area->y1);
/*** Init destination buffer ***/
if(lv_vglite_init_buf(&vgbuf, (uint32_t)dest_width, (uint32_t)dest_height, (uint32_t)dest_width * sizeof(lv_color_t),
(const lv_color_t *)draw_ctx->buf, false) != LV_RES_OK)
VG_LITE_RETURN_INV("Init buffer failed.");
/*** Init path ***/
int32_t rad = dsc->radius;
if(dsc->radius == LV_RADIUS_CIRCLE) {
rad = (width > height) ? height / 2 : width / 2;
}
if((dsc->radius == LV_RADIUS_CIRCLE) && (width == height)) {
float tang = ((float)rad * BEZIER_OPTIM_CIRCLE);
int32_t cpoff = (int32_t)tang;
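/* cpoff is the cubic Bezier control point offset: each quarter of the circle below is
* one VLC_OP_CUBIC_REL segment whose control points sit cpoff = BEZIER_OPTIM_CIRCLE * rad
* away from the corresponding arc end points, along the tangent. */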
int32_t circle_path[] = { /*VG circle path*/
VLC_OP_MOVE, rel_coords.x1 + rad, rel_coords.y1,
VLC_OP_CUBIC_REL, cpoff, 0, rad, rad - cpoff, rad, rad, /* top-right */
VLC_OP_CUBIC_REL, 0, cpoff, cpoff - rad, rad, 0 - rad, rad, /* bottom-right */
VLC_OP_CUBIC_REL, 0 - cpoff, 0, 0 - rad, cpoff - rad, 0 - rad, 0 - rad, /* bottom-left */
VLC_OP_CUBIC_REL, 0, 0 - cpoff, rad - cpoff, 0 - rad, rad, 0 - rad, /* top-left */
VLC_OP_END
};
err = vg_lite_init_path(&path, VG_LITE_S32, VG_LITE_HIGH, sizeof(circle_path), circle_path,
(vg_lite_float_t) rel_clip.x1, (vg_lite_float_t) rel_clip.y1,
((vg_lite_float_t) rel_clip.x2) + 1.0f, ((vg_lite_float_t) rel_clip.y2) + 1.0f);
}
else if(dsc->radius > 0) {
float tang = ((float)rad * BEZIER_OPTIM_CIRCLE);
int32_t cpoff = (int32_t)tang;
int32_t rounded_path[] = { /*VG rounded rectangular path*/
VLC_OP_MOVE, rel_coords.x1 + rad, rel_coords.y1,
VLC_OP_LINE, rel_coords.x2 - rad + 1, rel_coords.y1, /* top */
VLC_OP_CUBIC_REL, cpoff, 0, rad, rad - cpoff, rad, rad, /* top-right */
VLC_OP_LINE, rel_coords.x2 + 1, rel_coords.y2 - rad + 1, /* right */
VLC_OP_CUBIC_REL, 0, cpoff, cpoff - rad, rad, 0 - rad, rad, /* bottom-right */
VLC_OP_LINE, rel_coords.x1 + rad, rel_coords.y2 + 1, /* bottom */
VLC_OP_CUBIC_REL, 0 - cpoff, 0, 0 - rad, cpoff - rad, 0 - rad, 0 - rad, /* bottom-left */
VLC_OP_LINE, rel_coords.x1, rel_coords.y1 + rad, /* left */
VLC_OP_CUBIC_REL, 0, 0 - cpoff, rad - cpoff, 0 - rad, rad, 0 - rad, /* top-left */
VLC_OP_END
};
err = vg_lite_init_path(&path, VG_LITE_S32, VG_LITE_HIGH, sizeof(rounded_path), rounded_path,
(vg_lite_float_t) rel_clip.x1, (vg_lite_float_t) rel_clip.y1,
((vg_lite_float_t) rel_clip.x2) + 1.0f, ((vg_lite_float_t) rel_clip.y2) + 1.0f);
}
else {
int32_t rect_path[] = { /*VG rectangular path*/
VLC_OP_MOVE, rel_coords.x1, rel_coords.y1,
VLC_OP_LINE, rel_coords.x2 + 1, rel_coords.y1,
VLC_OP_LINE, rel_coords.x2 + 1, rel_coords.y2 + 1,
VLC_OP_LINE, rel_coords.x1, rel_coords.y2 + 1,
VLC_OP_LINE, rel_coords.x1, rel_coords.y1,
VLC_OP_END
};
err = vg_lite_init_path(&path, VG_LITE_S32, VG_LITE_LOW, sizeof(rect_path), rect_path,
(vg_lite_float_t) rel_clip.x1, (vg_lite_float_t) rel_clip.y1,
((vg_lite_float_t) rel_clip.x2) + 1.0f, ((vg_lite_float_t) rel_clip.y2) + 1.0f);
}
VG_LITE_ERR_RETURN_INV(err, "Init path failed.");
vg_lite_identity(&matrix);
/*** Init Color/Gradient ***/
if(dsc->bg_grad.dir != (lv_grad_dir_t)LV_GRAD_DIR_NONE) {
uint32_t colors[2];
uint32_t stops[2];
lv_color32_t col32[2];
/* Gradient setup */
uint8_t cnt = MAX(dsc->bg_grad.stops_count, 2);
for(uint8_t i = 0; i < cnt; i++) {
col32[i].full = lv_color_to32(dsc->bg_grad.stops[i].color); /*Convert color to RGBA8888*/
stops[i] = dsc->bg_grad.stops[i].frac;
#if LV_COLOR_DEPTH==16
colors[i] = ((uint32_t)col32[i].ch.alpha << 24) | ((uint32_t)col32[i].ch.blue << 16) |
((uint32_t)col32[i].ch.green << 8) | (uint32_t)col32[i].ch.red;
#else /*LV_COLOR_DEPTH==32*/
/* watch out: red and blue color components are inverted versus vg_lite_color_t order */
colors[i] = ((uint32_t)col32[i].ch.alpha << 24) | ((uint32_t)col32[i].ch.red << 16) |
((uint32_t)col32[i].ch.green << 8) | (uint32_t)col32[i].ch.blue;
#endif
}
lv_memset_00(&gradient, sizeof(vg_lite_linear_gradient_t));
err = vg_lite_init_grad(&gradient);
VG_LITE_ERR_RETURN_INV(err, "Init gradient failed");
err = vg_lite_set_grad(&gradient, cnt, colors, stops);
VG_LITE_ERR_RETURN_INV(err, "Set gradient failed.");
err = vg_lite_update_grad(&gradient);
VG_LITE_ERR_RETURN_INV(err, "Update gradient failed.");
grad_matrix = vg_lite_get_grad_matrix(&gradient);
vg_lite_identity(grad_matrix);
vg_lite_translate((float)rel_coords.x1, (float)rel_coords.y1, grad_matrix);
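/* Note: the VG-Lite linear gradient ramp is 256 px wide, hence the /256.0f scaling
* below to stretch it across the rectangle height or width. */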
if(dsc->bg_grad.dir == (lv_grad_dir_t)LV_GRAD_DIR_VER) {
vg_lite_scale(1.0f, (float)height / 256.0f, grad_matrix);
vg_lite_rotate(90.0f, grad_matrix);
}
else { /*LV_GRAD_DIR_HOR*/
vg_lite_scale((float)width / 256.0f, 1.0f, grad_matrix);
}
}
lv_opa_t bg_opa = dsc->bg_opa;
lv_color32_t bg_col32 = {.full = lv_color_to32(dsc->bg_color)}; /*Convert color to RGBA8888*/
if(bg_opa <= (lv_opa_t)LV_OPA_MAX) {
/* Only pre-multiply color if hardware pre-multiplication is not present */
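/* Illustration: with bg_opa = 128, a channel value of 200 becomes (200 * 128) >> 8 = 100,
* i.e. roughly half of its original value. */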
if(!vg_lite_query_feature(gcFEATURE_BIT_VG_PE_PREMULTIPLY)) {
bg_col32.ch.red = (uint8_t)(((uint16_t)bg_col32.ch.red * bg_opa) >> 8);
bg_col32.ch.green = (uint8_t)(((uint16_t)bg_col32.ch.green * bg_opa) >> 8);
bg_col32.ch.blue = (uint8_t)(((uint16_t)bg_col32.ch.blue * bg_opa) >> 8);
}
bg_col32.ch.alpha = bg_opa;
}
#if LV_COLOR_DEPTH==16
vgcol = bg_col32.full;
#else /*LV_COLOR_DEPTH==32*/
vgcol = ((uint32_t)bg_col32.ch.alpha << 24) | ((uint32_t)bg_col32.ch.blue << 16) |
((uint32_t)bg_col32.ch.green << 8) | (uint32_t)bg_col32.ch.red;
#endif
/*Clean & invalidate cache*/
lv_vglite_invalidate_cache();
/*** Draw rectangle ***/
if(dsc->bg_grad.dir == (lv_grad_dir_t)LV_GRAD_DIR_NONE) {
err = vg_lite_draw(&vgbuf, &path, VG_LITE_FILL_EVEN_ODD, &matrix, VG_LITE_BLEND_SRC_OVER, vgcol);
}
else {
err = vg_lite_draw_gradient(&vgbuf, &path, VG_LITE_FILL_EVEN_ODD, &matrix, &gradient, VG_LITE_BLEND_SRC_OVER);
}
VG_LITE_ERR_RETURN_INV(err, "Draw rectangle failed.");
err = vg_lite_finish();
VG_LITE_ERR_RETURN_INV(err, "Finish failed.");
err = vg_lite_clear_path(&path);
VG_LITE_ERR_RETURN_INV(err, "Clear path failed.");
if(dsc->bg_grad.dir != (lv_grad_dir_t)LV_GRAD_DIR_NONE) {
err = vg_lite_clear_grad(&gradient);
VG_LITE_ERR_RETURN_INV(err, "Clear gradient failed.");
}
return LV_RES_OK;
}
/**********************
* STATIC FUNCTIONS
**********************/
#endif /*LV_USE_GPU_NXP_VG_LITE*/


@@ -0,0 +1,77 @@
/**
* @file lv_draw_vglite_rect.h
*
*/
/**
* MIT License
*
* Copyright 2021, 2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_DRAW_VGLITE_RECT_H
#define LV_DRAW_VGLITE_RECT_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_VG_LITE
#include "lv_gpu_nxp_vglite.h"
#include "../../lv_draw_rect.h"
/*********************
* DEFINES
*********************/
/**********************
* TYPEDEFS
**********************/
/**********************
* GLOBAL PROTOTYPES
**********************/
/**
* Draw rectangle background with effects (rounded corners, gradient)
*
* @param draw_ctx drawing context
* @param dsc description of the rectangle
* @param coords the coordinates of the rectangle
* @retval LV_RES_OK Draw completed
* @retval LV_RES_INV Error occurred (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
lv_res_t lv_gpu_nxp_vglite_draw_bg(lv_draw_ctx_t * draw_ctx, const lv_draw_rect_dsc_t * dsc, const lv_area_t * coords);
/**********************
* MACROS
**********************/
#endif /*LV_USE_GPU_NXP_VG_LITE*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_DRAW_VGLITE_RECT_H*/
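As a usage illustration (not part of this patch), this is roughly how the entry point above can be wired into a draw-rectangle hook, letting the software renderer take over when the GPU path returns LV_RES_INV. lv_draw_sw_rect() is assumed to be the stock LVGL software implementation for the whole rectangle, so on fallback it redraws everything, not only the background.

#include "lv_draw_vglite_rect.h"

/* Sketch only: dispatch the rectangle background to VG-Lite and fall back to software on failure. */
static void draw_rect_with_fallback(lv_draw_ctx_t * draw_ctx, const lv_draw_rect_dsc_t * dsc,
                                    const lv_area_t * coords)
{
    if(lv_gpu_nxp_vglite_draw_bg(draw_ctx, dsc, coords) != LV_RES_OK)
        lv_draw_sw_rect(draw_ctx, dsc, coords); /* assumed software fallback */
}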


@@ -0,0 +1,153 @@
/**
* @file lv_gpu_nxp_vglite.c
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
/*********************
* INCLUDES
*********************/
#include "lv_gpu_nxp_vglite.h"
#if LV_USE_GPU_NXP_VG_LITE
#include "../../../core/lv_refr.h"
#if BLIT_DBG_AREAS
#include "lv_draw_vglite_blend.h"
#endif
/*********************
* DEFINES
*********************/
#if LV_COLOR_DEPTH==16
#define VG_LITE_PX_FMT VG_LITE_RGB565
#elif LV_COLOR_DEPTH==32
#define VG_LITE_PX_FMT VG_LITE_BGRA8888
#else
#error Only 16-bit and 32-bit color depths are supported. Set LV_COLOR_DEPTH to 16 or 32.
#endif
/**********************
* TYPEDEFS
**********************/
/**********************
* STATIC PROTOTYPES
**********************/
/**********************
* STATIC VARIABLES
**********************/
/**********************
* MACROS
**********************/
/**********************
* GLOBAL FUNCTIONS
**********************/
lv_res_t lv_vglite_init_buf(vg_lite_buffer_t * vgbuf, uint32_t width, uint32_t height, uint32_t stride,
const lv_color_t * ptr, bool source)
{
/*Test for memory alignment*/
if((((uintptr_t)ptr) % (uintptr_t)LV_ATTRIBUTE_MEM_ALIGN_SIZE) != (uintptr_t)0x0U)
VG_LITE_RETURN_INV("%s buffer (0x%x) not aligned to %d.", source ? "Src" : "Dest",
(size_t) ptr, LV_ATTRIBUTE_MEM_ALIGN_SIZE);
/*Test for stride alignment*/
if(source && (stride % (LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX * sizeof(lv_color_t))) != 0x0U)
VG_LITE_RETURN_INV("Src buffer stride (%d bytes) not aligned to %d bytes.", stride,
LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX * sizeof(lv_color_t));
vgbuf->format = VG_LITE_PX_FMT;
vgbuf->tiled = VG_LITE_LINEAR;
vgbuf->image_mode = VG_LITE_NORMAL_IMAGE_MODE;
vgbuf->transparency_mode = VG_LITE_IMAGE_OPAQUE;
vgbuf->width = (int32_t)width;
vgbuf->height = (int32_t)height;
vgbuf->stride = (int32_t)stride;
lv_memset_00(&vgbuf->yuv, sizeof(vgbuf->yuv));
vgbuf->memory = (void *)ptr;
vgbuf->address = (uint32_t)vgbuf->memory;
vgbuf->handle = NULL;
return LV_RES_OK;
}
#if BLIT_DBG_AREAS
void lv_vglite_dbg_draw_rectangle(lv_color_t * dest_buf, lv_coord_t dest_width, lv_coord_t dest_height,
lv_area_t * fill_area, lv_color_t color)
{
lv_area_t a;
/* top line */
a.x1 = fill_area->x1;
a.x2 = fill_area->x2;
a.y1 = fill_area->y1;
a.y2 = fill_area->y1;
lv_gpu_nxp_vglite_fill(dest_buf, dest_width, dest_height, &a, color, LV_OPA_COVER);
/* bottom line */
a.x1 = fill_area->x1;
a.x2 = fill_area->x2;
a.y1 = fill_area->y2;
a.y2 = fill_area->y2;
lv_gpu_nxp_vglite_fill(dest_buf, dest_width, dest_height, &a, color, LV_OPA_COVER);
/* left line */
a.x1 = fill_area->x1;
a.x2 = fill_area->x1;
a.y1 = fill_area->y1;
a.y2 = fill_area->y2;
lv_gpu_nxp_vglite_fill(dest_buf, dest_width, dest_height, &a, color, LV_OPA_COVER);
/* right line */
a.x1 = fill_area->x2;
a.x2 = fill_area->x2;
a.y1 = fill_area->y1;
a.y2 = fill_area->y2;
lv_gpu_nxp_vglite_fill(dest_buf, dest_width, dest_height, &a, color, LV_OPA_COVER);
}
#endif /* BLIT_DBG_AREAS */
void lv_vglite_invalidate_cache(void)
{
lv_disp_t * disp = _lv_refr_get_disp_refreshing();
if(disp->driver->clean_dcache_cb)
disp->driver->clean_dcache_cb(disp->driver);
}
/**********************
* STATIC FUNCTIONS
**********************/
#endif /*LV_USE_GPU_NXP_VG_LITE*/
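For reference (not part of this patch): a sketch of wrapping a source image into a vg_lite_buffer_t with the helper above. The wrapper function and its parameters are hypothetical; img_w is assumed to meet the stride alignment checked in lv_vglite_init_buf().

#include "lv_gpu_nxp_vglite.h"

/* Sketch only: wrap a decoded source image for a VG-Lite blit. */
static lv_res_t wrap_src_image(const lv_color_t * img_buf, lv_coord_t img_w, lv_coord_t img_h,
                               vg_lite_buffer_t * src_vgbuf)
{
    /* img_w is assumed to be a multiple of LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX,
     * otherwise the stride check in lv_vglite_init_buf() rejects the buffer. */
    uint32_t stride = (uint32_t)img_w * sizeof(lv_color_t);
    return lv_vglite_init_buf(src_vgbuf, (uint32_t)img_w, (uint32_t)img_h, stride, img_buf, true);
}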


@@ -0,0 +1,185 @@
/**
* @file lv_gpu_nxp_vglite.h
*
*/
/**
* MIT License
*
* Copyright 2020-2022 NXP
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights to
* use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
* the Software, and to permit persons to whom the Software is furnished to do so,
* subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next paragraph)
* shall be included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
* INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
* PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
* CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
* OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef LV_GPU_NXP_VGLITE_H
#define LV_GPU_NXP_VGLITE_H
#ifdef __cplusplus
extern "C" {
#endif
/*********************
* INCLUDES
*********************/
#include "../../../lv_conf_internal.h"
#if LV_USE_GPU_NXP_VG_LITE
#include "vg_lite.h"
#include "../../sw/lv_draw_sw.h"
#include "../../../misc/lv_log.h"
#include "fsl_debug_console.h"
/*********************
* DEFINES
*********************/
/** Use this symbol as the limit to disable a feature (the value has to be larger than the supported resolution) */
#define LV_GPU_NXP_VG_LITE_FEATURE_DISABLED (1920*1080+1)
/** Stride in px required by VG-Lite HW. Don't change this. */
#define LV_GPU_NXP_VG_LITE_STRIDE_ALIGN_PX 16U
#ifndef LV_GPU_NXP_VG_LITE_LOG_ERRORS
/** Enable logging of VG-Lite errors (\see LV_LOG_ERROR)*/
#define LV_GPU_NXP_VG_LITE_LOG_ERRORS 1
#endif
#ifndef LV_GPU_NXP_VG_LITE_LOG_TRACES
/** Enable logging of VG-Lite traces (\see LV_LOG_ERROR)*/
#define LV_GPU_NXP_VG_LITE_LOG_TRACES 0
#endif
/* Draw rectangles around BLIT tiles */
#define BLIT_DBG_AREAS 0
/* Print detailed info to SDK console (NOT to LVGL log system) */
#define BLIT_DBG_VERBOSE 0
/* Verbose debug print */
#if BLIT_DBG_VERBOSE
#define PRINT_BLT PRINTF
#else
#define PRINT_BLT(...)
#endif
/* The optimal cubic Bezier control point offset for approximating a unit-radius circle
* see: https://spencermortensen.com/articles/bezier-circle/
*/
#define BEZIER_OPTIM_CIRCLE 0.551915024494f
/* Draw lines for control points of Bezier curves */
#define BEZIER_DBG_CONTROL_POINTS 0
/**********************
* TYPEDEFS
**********************/
/**********************
* GLOBAL PROTOTYPES
**********************/
/**
* Fill a vg_lite_buffer_t structure according to the given parameters.
*
* @param[in/out] vgbuf Buffer structure to be filled
* @param[in] width Width of the buffer in pixels
* @param[in] height Height of the buffer in pixels
* @param[in] stride Stride of the buffer in bytes
* @param[in] ptr Pointer to the buffer (must be aligned according to VG-Lite requirements)
* @param[in] source True if this is a source buffer, false for a destination buffer
* @retval LV_RES_OK Buffer structure filled
* @retval LV_RES_INV Alignment requirements not met (\see LV_GPU_NXP_VG_LITE_LOG_ERRORS)
*/
lv_res_t lv_vglite_init_buf(vg_lite_buffer_t * vgbuf, uint32_t width, uint32_t height, uint32_t stride,
const lv_color_t * ptr, bool source);
#if BLIT_DBG_AREAS
/**
* Draw a simple rectangle, 1 px line width.
*
* @param dest_buf Destination buffer
* @param dest_width Destination buffer width (must be aligned on 16px)
* @param dest_height Destination buffer height
* @param fill_area Rectangle coordinates
* @param color Rectangle color
*/
void lv_vglite_dbg_draw_rectangle(lv_color_t * dest_buf, lv_coord_t dest_width, lv_coord_t dest_height,
lv_area_t * fill_area, lv_color_t color);
#endif
/**
* Clean & invalidate cache.
*/
void lv_vglite_invalidate_cache(void);
/**********************
* MACROS
**********************/
#define VG_LITE_COND_STOP(cond, txt) \
do { \
if (cond) { \
LV_LOG_ERROR("%s. STOP!", txt); \
for ( ; ; ); \
} \
} while(0)
#if LV_GPU_NXP_VG_LITE_LOG_ERRORS
#define VG_LITE_ERR_RETURN_INV(err, fmt, ...) \
do { \
if(err != VG_LITE_SUCCESS) { \
LV_LOG_ERROR(fmt, ##__VA_ARGS__); \
return LV_RES_INV; \
} \
} while (0)
#else
#define VG_LITE_ERR_RETURN_INV(err, fmt, ...) \
do { \
if(err != VG_LITE_SUCCESS) { \
return LV_RES_INV; \
} \
}while(0)
#endif /*LV_GPU_NXP_VG_LITE_LOG_ERRORS*/
#if LV_GPU_NXP_VG_LITE_LOG_TRACES
#define VG_LITE_LOG_TRACE(fmt, ...) \
do { \
LV_LOG_ERROR(fmt, ##__VA_ARGS__); \
} while (0)
#define VG_LITE_RETURN_INV(fmt, ...) \
do { \
LV_LOG_ERROR(fmt, ##__VA_ARGS__); \
return LV_RES_INV; \
} while (0)
#else
#define VG_LITE_LOG_TRACE(fmt, ...) \
do { \
} while (0)
#define VG_LITE_RETURN_INV(fmt, ...) \
do { \
return LV_RES_INV; \
}while(0)
#endif /*LV_GPU_NXP_VG_LITE_LOG_TRACES*/
#endif /*LV_USE_GPU_NXP_VG_LITE*/
#ifdef __cplusplus
} /*extern "C"*/
#endif
#endif /*LV_GPU_NXP_VGLITE_H*/