Preface
Recently, for a course requirement, we needed to stream video captured by an Android phone's camera to a server in real time, while still displaying it live on the phone. We chose to implement this with OpenCV and sockets. Tutorials covering this are scarce, so we wrote this article to document the process.
Part 1. Deploying OpenCV in Android Studio
There are already many tutorials on integrating OpenCV with Android Studio; we import OpenCV's Java SDK directly. (Thanks to the author of the detailed walkthrough we followed.)
1. Download OpenCV
Go to the official OpenCV website and download the Android package from the releases page. I used OpenCV 4.2.0.
2. Create the project in Android Studio
From the top-right of the Android Studio window, open SDK Manager → Android SDK → SDK Tools and make sure CMake and the NDK are installed.
Since we may use ncnn later, we created the project from the Native C++ template. The remaining choices follow the steps in the walkthrough mentioned above.
3. Configure OpenCV
After the project is created, choose File → New → Import Module and import the Java SDK folder from the OpenCV Android package; my path is:
opencv-4.2.0-android-sdk\OpenCV-android-sdk\sdk
If a yellow warning appears, tick the import options and change the module name to opencv, then click Finish to import OpenCV as a module. Now open build.gradle (make sure it is OpenCV's build.gradle, not the app's) and change
```groovy
apply plugin: 'com.android.application'
```
to:
```groovy
apply plugin: 'com.android.library'
```
and delete this line:
```groovy
defaultConfig { applicationId "org.opencv" }
```
To avoid unnecessary errors, keep the following consistent between the app's build.gradle and opencv's build.gradle:
```groovy
compileSdkVersion 29
buildToolsVersion "29.0.2"
```
Then, in the app's build.gradle, change
```groovy
externalNativeBuild { cmake { cppFlags "-std=c++14" } }
```
to:
```groovy
externalNativeBuild {
    cmake {
        arguments "-DANDROID_STL=c++_shared"
    }
}
```
This lets the app load OpenCV's shared libraries. Finally, add the OpenCV module inside the dependencies {} block:
```groovy
dependencies {
    implementation project(':opencv')
}
```
Keep the corresponding values identical in both files; if errors persist, adjust the OpenCV or SDK version you downloaded. Click Sync Now and wait for the sync to succeed. Next, declare the camera and network permissions in AndroidManifest.xml:
```xml
<uses-sdk tools:overrideLibrary="android.support.compat, android.arch.lifecycle" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-feature
android:name="android.hardware.camera"
android:required="false" />
<uses-feature
android:name="android.hardware.camera.autofocus"
android:required="false" />
<uses-feature
android:name="android.hardware.camera.front"
android:required="false" />
<uses-feature
android:name="android.hardware.camera.front.autofocus"
android:required="false" />
```
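Note that on Android 6.0 and later, the CAMERA permission must also be granted at runtime, not just declared in the manifest. The OpenCV CameraActivity used later requests it automatically, but if your screen extends a plain Activity, a minimal sketch looks like this (the ensureCameraPermission method and the request code are my own illustration, not part of the original project):

```java
import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Inside an Activity: request the camera permission if it is not yet granted.
private static final int REQ_CAMERA = 1; // arbitrary request code

private void ensureCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(
                this, new String[]{Manifest.permission.CAMERA}, REQ_CAMERA);
    }
}
```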
In the project directory app\src\main, create a folder named jniLibs (placing it exactly here keeps the later steps consistent with this article), and copy the libs folder from the downloaded opencv-android SDK into it. Because CMake does not search that path for the *.so shared libraries by default, add the following to CMakeLists.txt under the cpp folder:
```cmake
set_target_properties(libopencv_java4 PROPERTIES IMPORTED_LOCATION
    ${CMAKE_SOURCE_DIR}/../jniLibs/libs/${ANDROID_ABI}/libopencv_java4.so)
```
Here CMAKE_SOURCE_DIR is the folder containing CMakeLists.txt. We also need to declare the imported library (this add_library line must come before the set_target_properties call above):
```cmake
add_library(libopencv_java4 SHARED IMPORTED)
```
and link it into the native target:
```cmake
target_link_libraries( # Specifies the target library.
        native-lib      # default target from the Native C++ template; use your module's name if it differs
        libopencv_java4 # link OpenCV's .so
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib})
```
At this point OpenCV is ready to use; the steps above bring OpenCV in through JNI.
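Before moving on, it is worth verifying the integration. A minimal sketch, assuming the module import and jniLibs layout above (the OpenCvCheck class is my own illustration, not part of the original project):

```java
import android.util.Log;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Core;

public class OpenCvCheck {
    private static final String TAG = "OpenCvCheck";

    /** Returns true if the native OpenCV library loaded from jniLibs. */
    public static boolean verify() {
        if (OpenCVLoader.initDebug()) { // loads libopencv_java4.so packaged with the app
            Log.i(TAG, "OpenCV loaded, version " + Core.VERSION);
            return true;
        }
        Log.e(TAG, "OpenCV failed to load; check the jniLibs path and CMake setup");
        return false;
    }
}
```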
Part 2. Client-Side Implementation
1. Sharing the Socket across screens with Application
When a Socket is created inside an Activity, its lifecycle is tied to that Activity's. To share one Socket across different screens, we subclass Application. Create a Java class named MySocket:
```java
package com.example.myapplication;

import android.app.Application;
import java.net.Socket;

public class MySocket extends Application {
    // volatile: the socket is set from a worker thread and read from other activities
    private volatile Socket socket = null;

    public Socket getSocket() {
        return socket;
    }

    public void setSocket(Socket socket) {
        this.socket = socket;
    }
}
```
Then register it in AndroidManifest.xml by adding the following attribute to the application tag:
```xml
android:name=".MySocket"
```
Because Android keeps the Application object alive for the entire process, MySocket's lifetime matches the app's, which gives us a long-lived connection between the app and the server.
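One caveat: the connection is opened on a worker thread (see MainActivity below), so a screen opened very quickly could call getSocket() before the connection exists and crash on a null socket. A minimal defensive sketch (the SocketUtil class and the polling approach are my own additions, not part of the original project):

```java
import java.net.Socket;

public final class SocketUtil {
    private SocketUtil() {}

    /**
     * Polls the shared MySocket instance until the connection is ready,
     * or gives up after timeoutMs. Call from a worker thread only.
     */
    public static Socket waitForSocket(MySocket app, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            Socket s = app.getSocket();
            if (s != null && s.isConnected()) {
                return s;
            }
            Thread.sleep(50); // connection not ready yet; retry shortly
        }
        return null; // caller decides how to handle the timeout
    }
}
```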
2. Modifying MainActivity
We will not use MainActivity as the main screen; it only initializes MySocket. Its code is as follows:
```java
package com.example.myapplication;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import java.net.Socket;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Thread myThread = new Thread() {
            @Override
            public void run() {
                try {
                    // Use your server's LAN IP here; from the Android emulator,
                    // 10.0.2.2 reaches the host machine.
                    Socket socket = new Socket("127.0.0.1", 5050);
                    ((MySocket) getApplication()).setSocket(socket);
                    System.out.println("ok");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        myThread.start();
        setContentView(R.layout.activity_main);
        startActivity(new Intent(getApplicationContext(), StartActivity.class));
        finish();
    }
}
```
Note how the Socket is initialized: the connection must be made on a worker thread, since Android forbids network I/O on the main thread (it throws NetworkOnMainThreadException).
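Also, a plain new Socket(host, port) blocks until the OS-level timeout if the server is unreachable. A sketch of a connect with an explicit timeout (the ConnectHelper class and the 3-second value are my own assumptions, not part of the original project):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class ConnectHelper {
    private static final int CONNECT_TIMEOUT_MS = 3000; // assumed value; tune for your network

    private ConnectHelper() {}

    /** Connects with an explicit timeout instead of blocking indefinitely. */
    public static Socket connect(String host, int port) throws IOException {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress(host, port), CONNECT_TIMEOUT_MS);
        socket.setTcpNoDelay(true); // send the small length headers immediately
        return socket;
    }
}
```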
3. Creating StartActivity as the main screen
Nothing special is needed here: just a button that opens the camera. Here is activity_start.xml:
```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".StartActivity">
<LinearLayout
android:layout_height="match_parent"
android:layout_width="match_parent"
android:gravity="center"
android:layout_weight="1">
<Button
android:layout_width="280dp"
android:layout_height="32dp"
android:text="Test Button"
android:id="@+id/camera"
android:textColor="#000"
tools:ignore="InvalidId" />
</LinearLayout>
</androidx.constraintlayout.widget.ConstraintLayout>
```
The corresponding Activity code is below. Note that findViewById must be called after setContentView; calling it earlier returns null and leads to a crash:
```java
package com.example.myapplication;

import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class StartActivity extends AppCompatActivity {
    private Button button;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_start);
        // findViewById must run after setContentView, otherwise it returns null
        button = findViewById(R.id.camera);
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Intent intent = new Intent();
                intent.setClass(StartActivity.this, CameraGetActivity.class);
                StartActivity.this.startActivity(intent);
            }
        });
    }
}
```
4. Creating the OpenCV preview screen
With OpenCV we can open the phone's camera and render the frames on screen. Below is the source of the new preview screen, CameraGetActivity:
```java
package com.example.myapplication;
import android.os.Bundle;
import android.util.Log;
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraActivity;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.JavaCamera2View;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;
import java.util.ArrayList;
import java.util.List;
public class CameraGetActivity extends CameraActivity {
private static final String TAG = "OpencvCam";
private JavaCamera2View javaCameraView;
private int cameraId = JavaCamera2View.CAMERA_ID_ANY;
private CameraBridgeViewBase.CvCameraViewListener2 cvCameraViewListener2 = new CameraBridgeViewBase.CvCameraViewListener2() {
@Override
public void onCameraViewStarted(int width, int height) {
Log.i(TAG, "onCameraViewStarted width=" + width + ", height=" + height);
}
@Override
public void onCameraViewStopped() {
Log.i(TAG, "onCameraViewStopped");
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
return inputFrame.rgba();
}
};
private BaseLoaderCallback baseLoaderCallback = new BaseLoaderCallback(this) {
@Override
public void onManagerConnected(int status) {
Log.i(TAG, "onManagerConnected status=" + status + ", javaCameraView=" + javaCameraView);
switch (status) {
case LoaderCallbackInterface.SUCCESS: {
if (javaCameraView != null) {
javaCameraView.setCvCameraViewListener(cvCameraViewListener2);
javaCameraView.disableFpsMeter();
javaCameraView.enableView();
}
}
break;
default:
super.onManagerConnected(status);
break;
}
}
};
@Override
protected List<? extends CameraBridgeViewBase> getCameraViewList() {
Log.i(TAG, "getCameraViewList");
List<CameraBridgeViewBase> list = new ArrayList<>();
list.add(javaCameraView);
return list;
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_cameraget);
findView();
}
private void findView() {
javaCameraView = findViewById(R.id.javaCameraView);
}
@Override
public void onPause() {
Log.i(TAG, "onPause");
super.onPause();
if (javaCameraView != null) {
javaCameraView.disableView();
}
}
@Override
public void onResume() {
Log.i(TAG, "onResume");
super.onResume();
if (OpenCVLoader.initDebug()) {
Log.i(TAG, "initDebug true");
baseLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
} else {
Log.i(TAG, "initDebug false");
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, baseLoaderCallback);
}
}
}
```
Note this method in the code above:

```java
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    return inputFrame.rgba();
}
```

This is where OpenCV does its per-frame processing: inputFrame.rgba() returns each frame captured by the camera, and it is also where we will hook in the Socket transfer later (a small processing sketch follows the layout below). The layout file for this screen, activity_cameraget.xml, is:
```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="match_parent">
<org.opencv.android.JavaCamera2View
android:id="@+id/javaCameraView"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:show_fps="true"
app:camera_id="any" />
</androidx.constraintlayout.widget.ConstraintLayout>
```
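As promised above, here is a small per-frame processing sketch (my own example, not part of the original project): a drop-in variant of cvCameraViewListener2 that converts each frame to grayscale with Imgproc.cvtColor before it is displayed.

```java
// Field inside CameraGetActivity; pass it to setCvCameraViewListener(...)
CameraBridgeViewBase.CvCameraViewListener2 grayListener =
        new CameraBridgeViewBase.CvCameraViewListener2() {
            @Override
            public void onCameraViewStarted(int width, int height) {}

            @Override
            public void onCameraViewStopped() {}

            @Override
            public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
                Mat gray = new Mat();
                // RGBA -> single-channel grayscale; the view renders the returned Mat
                Imgproc.cvtColor(inputFrame.rgba(), gray, Imgproc.COLOR_RGBA2GRAY);
                return gray;
            }
        };
```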
At this point the preview does not fill the screen. To fix it, we modify the deliverAndDrawFrame method in OpenCV's own CameraBridgeViewBase.java: find the method and replace it, together with two new helpers, with the following:
```java
private int rotationToDegree() {
WindowManager windowManager = (WindowManager) getContext().getSystemService(Context.WINDOW_SERVICE);
int rotation = windowManager.getDefaultDisplay().getRotation();
int degrees = 0;
switch(rotation) {
case Surface.ROTATION_0:
if(mCameraIndex == CAMERA_ID_FRONT) {
degrees = -90;
} else {
degrees = 90;
}
break;
case Surface.ROTATION_90:
break;
case Surface.ROTATION_180:
break;
case Surface.ROTATION_270:
if(mCameraIndex == CAMERA_ID_ANY || mCameraIndex == CAMERA_ID_BACK) {
degrees = 180;
}
break;
}
return degrees;
}
private float calcScale(int widthSource, int heightSource, int widthTarget, int heightTarget) {
if(widthTarget <= heightTarget) {
return (float) heightTarget / (float) heightSource;
} else {
return (float) widthTarget / (float) widthSource;
}
}
protected void deliverAndDrawFrame(CvCameraViewFrame frame) {
Mat modified;
if (mListener != null) {
modified = mListener.onCameraFrame(frame);
} else {
modified = frame.rgba();
}
boolean bmpValid = true;
if (modified != null) {
try {
Utils.matToBitmap(modified, mCacheBitmap);
} catch(Exception e) {
Log.e(TAG, "Mat type: " + modified);
Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
bmpValid = false;
}
}
if (bmpValid && mCacheBitmap != null) {
Canvas canvas = getHolder().lockCanvas();
if (canvas != null) {
canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
if (BuildConfig.DEBUG) Log.d(TAG, "mStretch value: " + mScale);
int degrees = rotationToDegree();
Matrix matrix = new Matrix();
matrix.postRotate(degrees);
Bitmap outputBitmap = Bitmap.createBitmap(mCacheBitmap, 0, 0, mCacheBitmap.getWidth(), mCacheBitmap.getHeight(), matrix, true);
if (outputBitmap.getWidth() <= canvas.getWidth()) {
mScale = calcScale(outputBitmap.getWidth(), outputBitmap.getHeight(), canvas.getWidth(), canvas.getHeight());
} else {
mScale = calcScale(canvas.getWidth(), canvas.getHeight(), outputBitmap.getWidth(), outputBitmap.getHeight());
}
if (mScale != 0) {
canvas.scale(mScale, mScale, 0, 0);
}
Log.d(TAG, "mStretch value: " + mScale);
canvas.drawBitmap(outputBitmap, 0, 0, null);
if (mFpsMeter != null) {
mFpsMeter.measure();
mFpsMeter.draw(canvas, 20, 30);
}
getHolder().unlockCanvasAndPost(canvas);
}
}
}
```
Any missing-import errors can be fixed with Alt+Enter. If BuildConfig shows in red, ignore it: the corresponding .class file is generated under the opencv module once you run the app. With camera capture and display working, the next step is to transmit each frame over the network.
5. Transmitting frames over the Socket
During initialization the client already connected to the server; now we fetch that Socket instance in onCreate and set up the corresponding I/O streams:
```java
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_cameraget);
    findView();
    socket = ((MySocket) getApplication()).getSocket();
    try {
        os = socket.getOutputStream();
        pw = new PrintWriter(os); // writes the 16-byte length header
        ps = new PrintWriter(os); // writes the Base64 payload
        is = socket.getInputStream();
        br = new BufferedReader(new InputStreamReader(is)); // create once, not per frame
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```
The receiving and sending happen in onCameraFrame. Each frame is converted from Mat to Bitmap, JPEG-compressed, and Base64-encoded. We first send the length of the encoded image through pw, call flush(), wait for the server's acknowledgement, and then send the image data itself. Since onCameraFrame already runs on a background thread, no extra thread is needed for the Socket I/O.
```java
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    img = inputFrame.rgba();
    Mat img0 = img.clone();
    Mat dst = new Mat();
    Imgproc.resize(img0, dst, new Size(256, 256));
    Bitmap bitmap = Bitmap.createBitmap(dst.width(), dst.height(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(dst, bitmap, true);
    ByteArrayOutputStream bout = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, bout);
    byte[] imgBytes = bout.toByteArray();
    String sender = Base64.encodeToString(imgBytes, Base64.DEFAULT);
    // 16-byte, space-padded length header; the server reads exactly 16 bytes
    String lenStr = String.format("%-16d", sender.length());
    pw.write(lenStr);
    pw.flush();
    try {
        received = br.readLine(); // wait for the server's acknowledgement
        System.out.println(received);
    } catch (IOException e) {
        System.out.println(e);
    }
    ps.write(sender);
    ps.flush();
    return img;
}
```
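The exchange above is a small application-level framing protocol: a fixed 16-byte, space-padded length header, a newline-terminated acknowledgement from the server, then the payload. A sketch of the same logic factored into a helper (the FrameSender class is my own; the header format matches the code above):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;

public final class FrameSender {
    private FrameSender() {}

    /**
     * Sends one Base64-encoded frame: 16-byte space-padded length header,
     * wait for the server's newline-terminated acknowledgement, then the payload.
     */
    public static void sendFrame(PrintWriter out, BufferedReader in, String base64Frame)
            throws IOException {
        out.write(String.format("%-16d", base64Frame.length())); // header: payload length
        out.flush();
        String ack = in.readLine(); // server acknowledges the header
        if (ack == null) {
            throw new IOException("server closed the connection");
        }
        out.write(base64Frame); // payload
        out.flush();
    }
}
```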
We also want leaving the camera screen to discard its state instead of keeping it on the back stack, so we override the back-key handler:
```java
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_BACK && event.getRepeatCount() == 0) {
        Intent resultIntent = new Intent(CameraGetActivity.this, StartActivity.class);
        CameraGetActivity.this.finish();
        startActivity(resultIntent);
        return true; // consume the event; navigation is handled here
    }
    return false;
}
```
Also, in AndroidManifest.xml, add android:launchMode="singleTask" to StartActivity, so that returning to the main screen clears the other activities above it from the back stack.
The final CameraGetActivity code is as follows:
```java
package com.example.myapplication;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.util.Base64;
import android.util.Log;
import android.view.KeyEvent;
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraActivity;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.JavaCamera2View;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;
public class CameraGetActivity extends CameraActivity {
private static final String TAG = "OpencvCam";
private Socket socket = null;
private PrintWriter pw,ps;
private JavaCamera2View javaCameraView;
private OutputStream os;
private Mat img;
private InputStream is;
private BufferedReader br;
private String received;
private int cameraId = JavaCamera2View.CAMERA_ID_ANY;
private CameraBridgeViewBase.CvCameraViewListener2 cvCameraViewListener2 = new CameraBridgeViewBase.CvCameraViewListener2() {
@Override
public void onCameraViewStarted(int width, int height) {
Log.i(TAG, "onCameraViewStarted width=" + width + ", height=" + height);
}
@Override
public void onCameraViewStopped() {
Log.i(TAG, "onCameraViewStopped");
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
img = inputFrame.rgba();
Mat img0 = img.clone();
Mat dst = new Mat();
Imgproc.resize(img0,dst, new Size(256, 256));
Bitmap bitmap = Bitmap.createBitmap(dst.width(), dst.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(dst, bitmap,true);
ByteArrayOutputStream bout = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, bout);
byte[] imgBytes = bout.toByteArray();
String sender = Base64.encodeToString(imgBytes,Base64.DEFAULT);
String lenStr = String.format("%-16d", sender.length()); // 16-byte, space-padded length header
pw.write(lenStr);
pw.flush();
try {
received = br.readLine(); // wait for the server's acknowledgement
System.out.println(received);
} catch (IOException e) {
System.out.println(e);
}
ps.write(sender);
ps.flush();
return img;
}
};
private BaseLoaderCallback baseLoaderCallback = new BaseLoaderCallback(this) {
@Override
public void onManagerConnected(int status) {
Log.i(TAG, "onManagerConnected status=" + status + ", javaCameraView=" + javaCameraView);
switch (status) {
case LoaderCallbackInterface.SUCCESS: {
if (javaCameraView != null) {
javaCameraView.setCvCameraViewListener(cvCameraViewListener2);
javaCameraView.disableFpsMeter();
javaCameraView.enableView();
}
}
break;
default:
super.onManagerConnected(status);
break;
}
}
};
@Override
protected List<? extends CameraBridgeViewBase> getCameraViewList() {
Log.i(TAG, "getCameraViewList");
List<CameraBridgeViewBase> list = new ArrayList<>();
list.add(javaCameraView);
return list;
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_cameraget);
findView();
socket = ((MySocket) getApplication()).getSocket();
try {
os = socket.getOutputStream();
pw = new PrintWriter(os); // writes the 16-byte length header
ps = new PrintWriter(os); // writes the Base64 payload
is = socket.getInputStream();
br = new BufferedReader(new InputStreamReader(is)); // create once, not per frame
} catch (IOException e) {
e.printStackTrace();
}
}
private void findView() {
javaCameraView = findViewById(R.id.javaCameraView);
}
@Override
public void onPause() {
Log.i(TAG, "onPause");
super.onPause();
if (javaCameraView != null) {
javaCameraView.disableView();
}
}
@Override
public void onResume() {
Log.i(TAG, "onResume");
super.onResume();
if (OpenCVLoader.initDebug()) {
Log.i(TAG, "initDebug true");
baseLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
} else {
Log.i(TAG, "initDebug false");
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, baseLoaderCallback);
}
}
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
if (keyCode == KeyEvent.KEYCODE_BACK && event.getRepeatCount() == 0) {
Intent resultIntent = new Intent(CameraGetActivity.this, StartActivity.class);
CameraGetActivity.this.finish();
startActivity(resultIntent);
return true; // consume the event; navigation is handled here
}
return false;
}
}
```
That completes the client side. To actually receive the video stream, we still need the server-side code.
Part 3. The Server
We implement the server with OpenCV in Python; the full server code is below:

```python
import base64
import threading
import time
from socket import *

import cv2
import numpy as np


def recv_img(tcpCliSock):
    while True:
        start = time.time()  # used to compute the frame rate
        length = recvall(tcpCliSock, 16)  # read the 16-byte length header
        try:
            length = int(length)
        except:
            break
        stringData = recvall(tcpCliSock, int(length))  # read exactly `length` bytes of Base64 data
        data = base64.b64decode(stringData)  # decode Base64 back into JPEG bytes
        np_arr = np.frombuffer(data, np.uint8)  # np.fromstring is deprecated; frombuffer replaces it
        img = cv2.imdecode(np_arr, 1)  # decode the JPEG bytes into an image
        img_90 = cv2.flip(cv2.transpose(img), 1)  # the received frame is rotated, so rotate it back
        decimgs[0] = img_90  # publish the latest frame to the display loop
        end = time.time()
        seconds = end - start
        try:
            fps = 'FPS:' + str(int(1 / seconds))  # computed but not drawn; overlay it on the frame if needed
        except:
            pass
        time.sleep(0.01)


def recvall(sock, count):
    buf = b''  # accumulate bytes until `count` bytes have been read
    while count:
        # recv returns at most `count` bytes, so loop until we have them all
        newbuf = sock.recv(count)
        if not newbuf:
            return None
        buf += newbuf
        count -= len(newbuf)
    return buf


HOST = '127.0.0.1'
PORT = 5050
BUFSIZ = 1024
ADDR = (HOST, PORT)

tcpSerSock = socket(AF_INET, SOCK_STREAM)
tcpSerSock.bind(ADDR)
tcpSerSock.listen(2)

decimgs = [0]  # one-slot buffer shared between the receive thread and the display loop

while True:
    print('waiting for connection...')
    tcpCliSock, addr = tcpSerSock.accept()
    print('...connected from:', addr)
    t2 = threading.Thread(target=recv_img, args=(tcpCliSock,))
    t2.daemon = True  # do not keep the process alive for this thread
    t2.start()
    while True:
        img1 = decimgs[0]
        try:
            cv2.imshow("recv1", img1)
        except Exception as e:
            print(e)  # until the first frame arrives, decimgs[0] is the int 0 and imshow raises
        try:
            # the acknowledgement must end with '\n': the client reads it with readLine()
            tcpCliSock.send("ok\n".encode())
        except:
            tcpCliSock.send("-2\n".encode())  # reply even on failure so the client is not left blocking
        cv2.waitKey(1)
    tcpCliSock.close()

tcpSerSock.close()
```
Part 4. Summary
That completes the implementation of streaming images over the network with sockets and OpenCV. The complete project is also available on my GitHub. I hope it helps.