When we want to add a watermark to a video while recording it, there are two options:

A. Convert the video frame data to a Bitmap, draw the watermark on the Bitmap, and then convert the watermarked Bitmap back into frame data

This approach does work, and RenderScript intrinsics can be used to speed up the conversions, but the round trip from frame data to Bitmap and back again still dominates, so overall it remains slow

B. Write the watermark's YUV data directly into the captured YUV frame data. The steps are:

1. Draw the watermark content on a Bitmap in advance and convert the Bitmap into a byte array in YUV format

2. Obtain the raw video frame data

3. Copy the effective (non-black) part of the watermark's YUV array into the corresponding positions of the original frame data

Let’s talk about plan B in detail:

  1. Draw the watermark content on a Bitmap, then convert the Bitmap into a byte array. The Bitmap is white text on a black background; the black background makes it easy to decide, pixel by pixel, which values should be merged into the YUV frame data
private byte[] getOsdByte() {
    // Draw white text on a black background; black pixels will be skipped when merging
    Bitmap bitmap = Bitmap.createBitmap(CameraSettings.SRC_IMAGE_WIDTH,
            CameraSettings.SRC_IMAGE_HEIGHT, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    canvas.drawColor(Color.BLACK);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.WHITE);
    paint.setTextSize(80);
    canvas.drawText("Test text", CameraSettings.SRC_IMAGE_WIDTH / 2, 100, paint);
    return bitmapToNV12(bitmap, CameraSettings.SRC_IMAGE_WIDTH, CameraSettings.SRC_IMAGE_HEIGHT);
}

byte[] bitmapToNV12(Bitmap scaled, int inputWidth, int inputHeight) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    // NV12 uses 12 bits per pixel: a full Y plane plus a half-size interleaved UV plane
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    scaled.recycle();
    return yuv;
}


public static void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    int R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            // The alpha channel is ignored
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff);
            // BT.601 RGB -> YUV, video range: Y in [16, 235], U/V in [16, 240]
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV12 has a full Y plane followed by an interleaved UV plane sampled
            // by a factor of 2 in both directions: for every 4 Y values there is
            // 1 U and 1 V, taken from every other pixel on every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
            }
            index++;
        }
    }
}
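
The BT.601 video-range formulas behind such a conversion can be sanity-checked in isolation. The following is a standalone sketch (the constants are the common video-range coefficients, not code from the original post): pure white should map to Y ≈ 235 with neutral chroma (128), and pure black to Y = 16:

```java
public class Bt601Check {
    // BT.601 video-range RGB -> YUV for a single pixel; returns {Y, U, V}
    public static int[] rgbToYuv(int r, int g, int b) {
        int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
        int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
        int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
        return new int[] { y, u, v };
    }
}
```

Neutral chroma (U = V = 128) for both black and white is exactly what lets the merge step later treat the watermark as a luma-only overlay.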

The conversion above uses NV12 as the target YUV format. Note that which format you actually need depends on the color formats supported by the current device's codec; this is explained briefly below.

2. The video frame data here comes mainly from the Camera, and attention must be paid to the color format.
The MediaCodec encoders on mobile devices generally support one of two color formats:

  • COLOR_FormatYUV420Planar
  • COLOR_FormatYUV420SemiPlanar

COLOR_FormatYUV420Planar corresponds to the planar formats I420 and YV12;
COLOR_FormatYUV420SemiPlanar corresponds to the semi-planar formats NV12 and NV21.

The corresponding relationship is as follows:

  • I420: YYYYYYYY UU VV => standard YUV420P
  • YV12: YYYYYYYY VV UU => a variant of YUV420P
  • NV12: YYYYYYYY UVUV => standard YUV420SP
  • NV21: YYYYYYYY VUVU => a variant of YUV420SP
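
To make these layouts concrete, here is a small standalone sketch (plain Java, not from the original post; the class and method names are illustrative) that computes where the U sample covering pixel (x, y) lives in each format:

```java
public class Yuv420Offsets {
    // Byte index of the U sample covering pixel (x, y) in a w x h frame.
    // In 4:2:0, each chroma sample covers a 2x2 block of luma pixels.
    public static int uIndex(String format, int x, int y, int w, int h) {
        int frameSize = w * h;
        int cx = x / 2, cy = y / 2;   // chroma coordinates
        int chromaRow = w / 2;        // chroma samples per row
        switch (format) {
            case "I420": return frameSize + cy * chromaRow + cx;                 // U plane right after Y
            case "YV12": return frameSize + frameSize / 4 + cy * chromaRow + cx; // U plane after V plane
            case "NV12": return frameSize + (cy * chromaRow + cx) * 2;           // UVUV: U on even slots
            case "NV21": return frameSize + (cy * chromaRow + cx) * 2 + 1;       // VUVU: U on odd slots
            default: throw new IllegalArgumentException(format);
        }
    }
}
```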

If the Camera preview is not specially configured, it returns frames in NV21 format by default. When the color format set on the encoder does not match the Camera preview format, the data must be converted first; otherwise the encoded video will show artifacts such as garbled colors or a black screen.
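
As an illustration of such a conversion (a sketch, not code from the original post): NV21 and NV12 differ only in the byte order of the interleaved chroma plane, so converting between them just swaps each V/U pair:

```java
public class Nv21Converter {
    // Convert an NV21 frame to NV12 in place by swapping each V/U byte pair
    // in the interleaved chroma plane. The same call converts NV12 back to NV21.
    public static void nv21ToNv12InPlace(byte[] frame, int width, int height) {
        int frameSize = width * height; // the Y plane is identical in both formats
        for (int i = frameSize; i < frame.length; i += 2) {
            byte v = frame[i];
            frame[i] = frame[i + 1]; // U moves to the even slot
            frame[i + 1] = v;        // V moves to the odd slot
        }
    }
}
```

Converting to or from the planar formats (I420/YV12) instead requires de-interleaving the chroma plane into two separate planes, not just a swap.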

3. Copy the effective part of the watermark YUV array into the corresponding positions of the original frame data

This method overlays the effective part of the NV12 watermark data (the white pixels of the white-on-black bitmap described above) onto the original NV12 video frame data, i.e. the corresponding values of array B are written over array A:

  • offset_x is the X-axis offset of B within A
  • offset_y is the Y-axis offset of B within A
public static void mergeOsd(byte[] nv12_A, byte[] nv12_B, int offset_x, int offset_y,
                            int a_width, int a_height, int b_width, int b_height) {
    // Only the Y plane is merged: a white-on-black watermark has neutral chroma
    // (U = V = 128) everywhere, so the frame's own UV plane can be left untouched.
    for (int i = 0; i < b_height; i++) {
        for (int j = 0; j < b_width; j++) {
            // Skip black pixels: in video-range YUV, black has a luma value of 16
            if (nv12_B[i * b_width + j] != 16) {
                nv12_A[(i + offset_y) * a_width + (j + offset_x)] = nv12_B[i * b_width + j];
            }
        }
    }
}

In this way, array A now holds the frame data with the watermark overlaid and can be fed to the encoder.
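
To see the merge logic in isolation, here is a minimal self-contained sketch (the sizes and values are hypothetical, not from the original post) that applies the same Y-plane overlay idea as mergeOsd, copying a 2×2 white block into a 4×4 black Y plane at an offset:

```java
public class MergeDemo {
    // Same idea as mergeOsd, restricted to the Y plane for clarity
    public static void mergeY(byte[] a, byte[] b, int offX, int offY,
                              int aWidth, int bWidth, int bHeight) {
        for (int i = 0; i < bHeight; i++) {
            for (int j = 0; j < bWidth; j++) {
                if (b[i * bWidth + j] != 16) { // 16 = video-range black
                    a[(i + offY) * aWidth + (j + offX)] = b[i * bWidth + j];
                }
            }
        }
    }
}
```

Note that the caller is responsible for keeping offX + bWidth ≤ aWidth and offY + bHeight ≤ aHeight; otherwise the watermark rows would wrap around or run past the end of the frame.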