AndroiDAQ Targeting System, Using Pan, Tilt, and OpenCV Image Processing

Okay, now that you've read the first three articles about interfacing the AndroiDAQ micro-controller module with stepper motors and servo motors, and about building a pan and tilt head using these motors, you should also know how to control this pan and tilt head with the micro-controller firmware on the AndroiDAQ module. Perhaps you even went as far as writing some Java code for Android to give yourself a touchscreen joystick on your Android phone, so you can control your pan and tilt head with the AndroiDAQ module over a wired or wireless connection. If so, congratulations: your learning efforts have paid off and you have come quite far in your understanding of data acquisition, motion control, and robotics; well done!

 
AndroiDAQ Robot

Sneak peek of our lab's AndroiDAQ robot

However, all this manual manipulation of the pan and tilt head on your part, using a touchscreen to make it go up and down and then left and/or right, seems mundane, especially when you consider that most Android devices have more computing horsepower than the computers used for the Apollo missions to the moon. Surely some of this computing power can be harnessed to make a web camera mounted on the pan and tilt head acquire and track a selected target, or to let a Nerf gun mounted on the pan and tilt head use a stationary webcam to find motion in a scene of interest and then fire a Nerf missile at the subject causing the motion with some degree of accuracy.

 

If the above ideas intrigue you, please read on, as these are the topics discussed in this article. We will be using an AndroiDAQ module that is electronically interfaced with the pan and tilt head assembly built as described in the last three articles. We will also use a laptop computer that is set up for Java development, and we will walk through several Java source code classes that provide the visual and operational interfaces to the web camera, the AndroiDAQ module, and the pan and tilt assembly using OpenCV. The article presents some simple OpenCV algorithms that use the webcam video output to find the center of gravity of the item of interest, which is then used to control the pan and tilt head via the AndroiDAQ module. After you complete this article you will understand how to use a webcam, a few simple OpenCV algorithms, and some mathematical ratios to computer-control a pan and tilt head with the AndroiDAQ module, for facial tracking and/or motion target acquisition.

 

First let’s discuss the Java development system. There are many blog posts and articles written that can teach you how to load the Eclipse Integrated Development Environment (IDE) and how to set it up for Java program development. Of course you can use any IDE that you are comfortable with and on any Operating System (OS) that you like. For example, our lab laptop runs Debian Linux and we prefer Eclipse for several reasons beyond the scope of this article, but you can use any IDE on a Windows system or Mac desktop or laptop for Java development.

 

To use the code provided in this article, you must download and install the OpenCV library that is appropriate for your OS, which is also covered by many fine blog posts and articles, so it won’t be discussed here. For this article it is assumed that you have a development system ready for Java development and that it supports the Java Swing graphical user interface components. You will also need a USB webcam that is fully functional under your OS, and you will need to have installed the OpenCV library and integrated it into your IDE, so that the OpenCV libraries are available to Java in your IDE. We also suggest that you read through some of the basic OpenCV tutorials available at:

 

http://docs.opencv.org/doc/tutorials/tutorials.html

 

First we will need to develop a Java USB communications interface to connect to and control the AndroiDAQ module over USB or Bluetooth. For this we will utilize the Java Simple Serial Connector (JSSC) version 2.8 library, available here:

 

https://github.com/scream3r/java-simple-serial-connector/releases.

 

Simply download the zipped file, unzip it to a known location, and then link the jssc.jar file into your IDE referenced libraries. It is suggested to place the unzipped JSSC folder and associated library jar file in your workspace folder for your IDE, so that you can find them easily. There are many tutorials on how to implement the JSSC library for your IDE on the Internet, so we won’t go into detail about that in this article.

 

For a good, yet simple USB communications program, one typically needs to know what USB ports are available on the system, have a method to connect to the desired USB port, have a method to send serial text data to the USB port, and have a method to receive textual data from the USB port. To find which USB ports are available on the system, we will utilize JSSC’s SerialPortList.getPortNames() function, which returns a string array of the USB ports the OS reports as available. The usage of this and the other JSSC classes and functions used in this article is explained in the JSSC JavaDoc folder, which is included in the JSSC zipped file that you downloaded and unzipped above. To utilize the returned ports string array, we will populate a JComboBox from the Java Swing library when the user’s interface is built.
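As a quick illustration, below is a minimal, hypothetical sketch of that port-enumeration step, assuming JSSC is already on your classpath; the full AndroiDAQ_Comm listing later in the article does the same work inside its getAvailablePorts() and buildGUI() methods, and the class name here is ours.

// Minimal sketch: list the serial ports the OS reports and offer them in a JComboBox.
import javax.swing.DefaultComboBoxModel;
import javax.swing.JComboBox;
import jssc.SerialPortList;
public class PortListSketch {
	public static void main(String[] args) {
		String[] ports = SerialPortList.getPortNames();   // e.g. "COM3" or "/dev/ttyUSB0"
		JComboBox<Object> selector = new JComboBox<Object>(new DefaultComboBoxModel<Object>(ports));
		System.out.println("Found " + selector.getItemCount() + " serial port(s)");
	}
}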

 

For connecting to the selected port from the JComboBox port list, we will utilize a Swing JButton with an actionPerformed ActionListener. Here the JComboBox’s getSelectedItem() function returns the user-selected port name, so that when the JButton is pressed, the actionPerformed method tries to connect to the selected port via our connect() method, which opens the port with JSSC’s openPort() function.

 

To implement reading from and writing to the USB port, we will use a JTextArea inside a JScrollPane for the output text field, and a JTextField with an ActionListener for the input text field, so that we only need to type the desired text into the JTextField box and press Enter to send the data to the USB port. Our AndroiDAQ_Comm class is set up to extend JPanel and implement an ActionListener. As textual data is received from the USB port, the JTextArea’s scroll pane shows its scroll bars once the text overflows the control. To send text to the USB port, which controls the menu functions of the AndroiDAQ module (see below), one simply enters the AndroiDAQ menu item number, for example a 5 to read the AndroiDAQ RTC clock, and presses Enter. The JTextField ActionListener uses JSSC’s writeString() function to send the data out the serial port. Below is an image of how the AndroiDAQ COM interface looks.

 

AndroiDAQComm Interface

 

Below is a listing of all AndroiDAQ data acquisition menu item commands and numbers for your reference:

 

Number to send:    Firmware sub-routine run (shown at the end of each entry):

 0    Prepare and receive the 16 channel configuration settings. SetPins
 1    Set the 16 channels to the received configuration settings. SetConfig
 2    Read input channels and send readings to Serial. GetInputs
 3    Not used; this is a test loop for your own subroutine development. testloop
 4    Read ADC voltages and send these readings to Serial. GetVolts
 5    Read the AndroiDAQ Real Time Clock (RTC) and send values to Serial. GetTime
 6    Auto-set the RTC with system values. Note: you need to send date/time information to the AndroiDAQ firmware in yy_MM_dd_E_kk_mm_ss format. SetTime
 7    Read the AndroiDAQ log file (log.txt) and send its strings to Serial. ReadLog
 8    Delete the AndroiDAQ log file. DeleteLog
 9    Stop continuous input reading
10    Start continuous input reading
11    Start continuous voltage reading
12    Stop continuous voltage reading
13    Center Pan and Tilt example
14    Move Servo Motor
15    Move Stepper Motor
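For reference when reading the code that follows, each menu command is sent as plain text that ends with a carriage return and has a leading zero in front of the command number; this is exactly the string format built by actionPerformed(), centerPanTilt(), and reportXYChanges() in the listings below. A short, hypothetical usage sketch, assuming a port has already been opened with connect(), might look like this:

// Hypothetical usage sketch of the AndroiDAQ command string format (port assumed open).
AndroiDAQ_Comm.writetoport("05\r");                      // menu item 5: read the RTC
AndroiDAQ_Comm.writetoport("013\r");                     // menu item 13: center the pan and tilt head
AndroiDAQ_Comm.writetoport("015\r" + "04\r" + "00\r");   // menu item 15: move the stepper 4 steps, direction 0
AndroiDAQ_Comm.writetoport("014\r" + "01500\r");         // menu item 14: send the servo its 1500 center value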

 

Below is the Java source code for this simple AndroiDAQ_Comm Java program. It is also available for download at:

  

https://drive.google.com/open?id=0B65DMk4XWk-4OHlpS1Z1ZFMxQlE&authuser=0 

 

//AndroiDAQ_Comm.java
// Copyright Controlled Capture Systems, Rick Fluck 2013-2014 
/* Uses JSSC for JAVA available at: 
 *https://github.com/scream3r/java-simple-serial-connector/releases. 
 * Uses AndroiDAQ module and various firmware changes to control 
 * one servo and one stepper motor
 * AndroiDAQ information at: http://www.controlcapture.com/androiddaqmod
 */
import jssc.SerialPort;
import jssc.SerialPortEvent;
import jssc.SerialPortEventListener;
import jssc.SerialPortException;
import jssc.SerialPortList;
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
public class AndroiDAQ_Comm extends JPanel implements ActionListener {
	private static final long serialVersionUID = 1L;
	static SerialPort serialPort;	
	static Boolean fromMain = false;	
	JPanel mainPanel;
	JFrame frame;
	JComboBox<Object> selector;
	JButton connectButton;
	static JTextArea text;
	JTextField inputText;
	JScrollPane scroller;
	AndroiDAQ_Interface motiondec;
	String[] myPorts;
	
	public AndroiDAQ_Comm() {
    	super();
        // added below so other programs can use this  
        //String SerialPortID = "/dev/ttyAMA0";
        //System.setProperty("gnu.io.rxtx.SerialPorts", SerialPortID);
        myPorts = getAvailablePorts();
        if (fromMain) {
        	buildGUI();	
        } else {
        	try {
        		// when not launched from main(), connect directly to a hard-coded
        		// port index; adjust this index to match your system's port list
        		connect(myPorts[2].toString());
        	} catch ( Exception e ) {
        		e.printStackTrace();
        	}
        }
        //motiondec = new MotionDetector();
    }
	public  String[] getAvailablePorts() {
	    String[] list = SerialPortList.getPortNames();
	    return list;
       }
    void connect ( String portName ) throws Exception {
    	serialPort = new SerialPort(portName);
        if ( serialPort.isOpened() ) {
           	System.out.println("Error: Port is currently in use");
        } else {
        	try {
            	serialPort.openPort();//Open serial port
               	serialPort.setParams(SerialPort.BAUDRATE_115200, 
                                     SerialPort.DATABITS_8,
                                     SerialPort.STOPBITS_1,
                                     SerialPort.PARITY_NONE);
				//Set params. Also you can set params 
				//by this string: serialPort.setParams(9600, 8, 1, 0);
				//Add SerialPortEventListener
                serialPort.addEventListener(new SerialReader());
            	if (fromMain) {
            		text.append("Connected to USB Port" + "\n");
            	} else {
            		AndroiDAQ_Interface.text.append("Connected to USB Port" + "\n");
            	}
          	} catch (SerialPortException ex) {
              	System.out.println(ex);
            }
        }     
    }
    public static void disconnect() {
        if (serialPort != null) {
            try {
                serialPort.closePort();//Close serial port
            }  catch (SerialPortException e) {
				// TODO Auto-generated catch block
				e.printStackTrace();
	    }
            System.out.println("Disconnected to Serial Port");
        }
    }
    public static class SerialReader implements SerialPortEventListener {
    	int byteCount;
		@Override
		public void serialEvent(SerialPortEvent event) {
			
	     	if(event.isRXCHAR()){//If data is available
		    	//System.out.print("event: " + event.getEventValue());
				try {
		    		while ((byteCount = serialPort.getInputBufferBytesCount()) > 0) {
			  			String message = serialPort.readString(byteCount);
			  			//System.out.print("message: " + message.toString());
			 			if (!fromMain) {
		              		AndroiDAQ_Interface.text.append(message);
		         		}else {
		              		text.append(message + "\r");
		        		}
		 			}
	       		} catch (SerialPortException e) {
					// TODO Auto-generated catch block
					e.printStackTrace();
				}
	   		}
		}
    }
    public static void writetoport(String send) {
	    try {
	    	serialPort.writeString(send);
	    } catch ( SerialPortException e) {
	    	e.printStackTrace();
	    }
   }
    public void buildGUI() {
    	frame = new JFrame("AndroiDAQ COM");
		frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    	JPanel panel = new JPanel(new BorderLayout());
		panel.setBorder(BorderFactory.createEmptyBorder(10,10,10,10));
		JPanel actionPanel  = new JPanel(new GridLayout(1, 0));
		JLabel input = new JLabel("Ports (select one):");
		selector = new JComboBox<Object>();
		selector.setModel(new DefaultComboBoxModel<Object>(myPorts));
		connectButton = new JButton("Connect to Port");
		connectButton.addActionListener(new ConnectListener());
		actionPanel.add(input);
		actionPanel.add(selector);
		actionPanel.add(connectButton);
		inputText = new JTextField(50);
		inputText.addActionListener(this);
		text = new JTextArea(10,50);
		text.setLineWrap(true);
		scroller = new JScrollPane (text);
		scroller.getVerticalScrollBar().addAdjustmentListener(new AdjustmentListener() {  
	        public void adjustmentValueChanged(AdjustmentEvent e) {  
	            e.getAdjustable().setValue(e.getAdjustable().getMaximum());  
	        }
	    });
		panel.add(actionPanel, BorderLayout.NORTH);
		panel.add(inputText, BorderLayout.CENTER);
		panel.add(scroller ,BorderLayout.SOUTH);
		//motionPanel = new MotionPanel(); // the sequence of pictures appear here
		//frame.getContentPane().add(BorderLayout.NORTH, inputText);
		frame.getContentPane().add(BorderLayout.NORTH, panel);
		//frame.getContentPane().add(BorderLayout.CENTER, motionPanel);
		//frame.setBounds(50,50,600,300);
		frame.pack();
		frame.setVisible(true);
    }
    public static void main ( String[] args ) {
   	fromMain = true;
    	new AndroiDAQ_Comm();
    } 
	@Override
	public void actionPerformed(ActionEvent arg0) {
		String text = "0" + inputText.getText() + "\r";
		//System.out.print("String: " + text + "\n");
		writetoport(text);
		inputText.setText("");
	}
	public class ConnectListener implements ActionListener {
		@Override
		public void actionPerformed(ActionEvent e) {
			String portToConnect = selector.getSelectedItem().toString();
			//System.out.print("Port to connect to is: " + portToConnect + "\r");
			try {
	            //connect("/dev/ttyUSB0");
	            connect(portToConnect);
	        } catch ( Exception e1 ) {
	            e1.printStackTrace();
	        } 
		}
	}
}

Now that we have a simple USB communications program, or better, class, we need to write another class to view a live image from the USB web camera. This new class extends JPanel, as it will be used by our Main or viewer interface class that will be described later. This is where OpenCV comes into play: we use OpenCV’s highgui VideoCapture class to grab single images from the connected USB web camera and then convert each matrix image to a BufferedImage for display on a Java Graphics2D object. Later this will all be done in a Java thread, so that the Main or viewer interface class can be updated as necessary without appearing to lock up during any heavy image processing. Below is the Java source code to capture images from your USB webcam using OpenCV:

 

// OpenCVWebCamTest
// Copyright 2012-2014 Rick Fluck, Controlled Capture Systems.
import java.awt.*;  
import java.awt.image.BufferedImage;  
import javax.swing.JFrame;
import javax.swing.JPanel; 
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.VideoCapture;
public class OpenCVWebCamTest extends JPanel {  
  	private static final long serialVersionUID = 1L;  
  	private BufferedImage image;  
  
  	// Create a constructor method  
  	public OpenCVWebCamTest(){  
    	super();  
  	}  
  	private BufferedImage getimage(){  
		return image;  
  	}  
  	private void setimage(BufferedImage newimage){  
    	image = newimage;
    	return;  
  	}  
  
  	/**  
   	* Converts/writes a Mat into a BufferedImage.  
   	*  
   	* @param matrix Mat of type CV_8UC3 or CV_8UC1  
   	* @return BufferedImage of type TYPE_3BYTE_BGR or TYPE_BYTE_GRAY  
   	*/  
  	public static BufferedImage matToBufferedImage(Mat matrix) {  
    	int cols = matrix.cols();  
    	int rows = matrix.rows();  
    	int elemSize = (int)matrix.elemSize();  
    	byte[] data = new byte[cols * rows * elemSize];  
    	int type;  
    	matrix.get(0, 0, data);  
    	switch (matrix.channels()) {  
      	case 1:  
        	type = BufferedImage.TYPE_BYTE_GRAY;  
        	break;  
      	case 3:  
        	type = BufferedImage.TYPE_3BYTE_BGR;  
        	// bgr to rgb  
        	byte b;  
        	for(int i=0; i<data.length; i=i+3) {  
          		b = data[i];  
          		data[i] = data[i+2];  
          		data[i+2] = b;  
        	}  
        	break;  
      	default:  
        	return null;  
    }  
    BufferedImage image2 = new BufferedImage(cols, rows, type);  
    image2.getRaster().setDataElements(0, 0, cols, rows, data);  
    return image2;  
  	}  
  	public void paintComponent(Graphics g) {
	  	super.paintComponent(g);
	  	Graphics2D g2 = (Graphics2D) g;
	  	BufferedImage temp = getimage();
	  	//int width = temp.getWidth();
	  	//int height = temp.getHeight();
	  	g2.drawImage(temp,10,10,640,480, this);  
  	}  
  	public static void main(String arg[]){  
   		// Load the native library.  
	  	System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
	  	JFrame frame = new JFrame("BasicPanel");  
	  	frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);  
	  	frame.setSize(400,400);  
	  	OpenCVWebCamTest panel = new OpenCVWebCamTest();  
	  	frame.setContentPane(panel);       
	  	frame.setVisible(true);       
	  	Mat webcam_image = new Mat();  
	  	BufferedImage temp;  
	  	VideoCapture capture =new VideoCapture(0);  
	  	if( capture.isOpened()) {
		  	while( true ) {  
			  	capture.read(webcam_image);  
			  	if( !webcam_image.empty()) {  
				  	frame.setSize(webcam_image.width()+40,webcam_image.height()+60);  
				  	temp = matToBufferedImage(webcam_image);  
				  	panel.setimage(temp);  
				  	panel.repaint();  
			  	} else {  
				  	System.out.println(" --(!) No captured frame -- Break!");  
				  	break;  
			  	}  
		  	}  
      	}  
	  	return;  
  	}  
}  

 

We now have a USB communications class and a USB webcam image capture class, both of which can be run as stand-alone programs to test for proper operation. With these two classes, you should be able to communicate with your AndroiDAQ module using the USB communications class, and view the live image coming from your connected USB webcam using the webcam capture class. Now we can develop a Main or viewer interface class that will utilize these two working classes. The webcam capture class will be converted into a controller class, which will update this new viewer class with newly captured camera images and drive the USB communication class by passing it input text from the viewer interface class. It will also update the viewer class with the textual data received over USB. Using a viewer class in this way allows us to put the heavy work of the controller class in a Java thread, so that the user’s interface, the viewer class, never appears locked up due to background operations like image capture, USB data retrieval and sending, and, later, image processing.

 

To develop the viewer class, you know from the discussion above that you need separate input and output text regions for USB communication with the AndroiDAQ module. You also know that you need a Graphics2D object to display the live imagery from the USB webcam; however, because the USB webcam image capture class was written to extend JPanel, we only need to reference that class to get the image into our new viewer class. We will also utilize a couple of JButtons with ActionListeners in this viewer class, which will later control our image processing sub-routines, or classes, to control the pan and tilt assembly. Below is the Java source code for our viewer or Interface class, which shows a live image from the web camera:

 

// AndroiDAQ_Interface.java
// Copyright Controlled Capture Systems, Rick Fluck 2013-2014 
/* Portions of code are credited to Andrew Davison,  July 2013
 * Uses AndroiDAQ module and various firmware changes to control 
 * one servo and one stepper motor
 * Uses face detection and motion detection using JavaCV by Samuel Audet  
 *  https://code.google.com/p/javacv/ 
 * AndroiDAQ information at:http://www.controlcapture.com/androiddaqmod
 */
import java.awt.*;
import java.awt.event.*;
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.*;
public class AndroiDAQ_Interface  implements ActionListener {
  // GUI components
  private AndroiDAQ_Cam_Controller motionPanel;
	JFrame frame;
	static JTextArea text;
	JTextField inputText;
	JScrollPane scroller;
	JButton button1;
	JButton button2;
	boolean tracking = false;
	boolean detecting = false;
	
	
  public AndroiDAQ_Interface() {
    super();
    frame = new JFrame("AndroiDAQ USB Camera Interface");
	frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
	frame.setSize(500,400); 
	frame.addWindowListener(new java.awt.event.WindowAdapter()	{
		public void windowClosing(WindowEvent winEv){
			AndroiDAQ_Comm.disconnect();
			motionPanel.closeDown();
		}
	});
	
	//Main Panel
	JPanel panel = new JPanel(new BorderLayout());
	panel.setBorder(BorderFactory.createEmptyBorder(10,10,10,10));
	panel.setBackground(Color.LIGHT_GRAY);
	
	//Button Panel
	JPanel actionPanel  = new JPanel(new GridLayout(1, 0));
	button1 = new JButton("Track Face");
	button1.addActionListener(new TrackFaceListener());
	button2 = new JButton("Motion Detect");
	button2.addActionListener(new MotionDetectListener());
	
	//Input Panel
	JPanel inputPanel = new JPanel(new BorderLayout());
	JLabel input = new JLabel("DAQ Input:");
	inputText = new JTextField(50);
	inputText.addActionListener(this);
	text = new JTextArea(10,50);
	text.setLineWrap(true);
	
	//Output Panel
	JPanel outputPanel = new JPanel(new BorderLayout());
	JLabel output = new JLabel("DAQ Output:");
	scroller = new JScrollPane (text);
	scroller.getVerticalScrollBar().addAdjustmentListener(new AdjustmentListener() {  
        public void adjustmentValueChanged(AdjustmentEvent e) {  
            e.getAdjustable().setValue(e.getAdjustable().getMaximum());  
        }
    });
	actionPanel.add(button1);
	actionPanel.add(button2);
	inputPanel.add(input, BorderLayout.NORTH);
	inputPanel.add(inputText, BorderLayout.CENTER);
	outputPanel.add(output, BorderLayout.NORTH);
	outputPanel.add(scroller, BorderLayout.CENTER);
	panel.add(actionPanel, BorderLayout.NORTH);
	panel.add(inputPanel, BorderLayout.CENTER);
	panel.add(outputPanel ,BorderLayout.SOUTH);
	motionPanel = new AndroiDAQ_Cam_Controller();
	// the sequence of pictures appear here
	motionPanel.setBorder(BorderFactory.createEmptyBorder(10,10,10,10));
	BufferedImage image = null;
    try {
        image = ImageIO.read(frame.getClass().getResource("/CCapturelogo.png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    frame.setIconImage(image);
	frame.getContentPane().add(BorderLayout.CENTER, panel);
	frame.getContentPane().add(BorderLayout.NORTH, motionPanel);
	frame.addWindowListener( new WindowAdapter() {
	    public void windowOpened( WindowEvent e ){
	    	inputText.requestFocus();
	    }
	});
	//frame.setBounds(20,20,620,800);
	frame.setLocationByPlatform(true);
	frame.pack();
	frame.setVisible(true);
  } 
  @Override
	public void actionPerformed(ActionEvent arg0) {
		String text = "0" + inputText.getText() + "\r";
		System.out.print("String: " + text + "\n");
		motionPanel.writeToPort(text);
		inputText.setText("");
	}
  // -------------------------------------------------------
	public class TrackFaceListener implements ActionListener {
		@Override
		public void actionPerformed(ActionEvent e) {
			System.out.print("TrackFaceListener: " + e);
			if (!tracking) {
				button1.setText("Tracking For Faces");
				button2.setEnabled(false);
				motionPanel.setTrackFace(true);
				tracking = true;
			} else {
				button1.setText("Track Face");
				button2.setEnabled(true);
				motionPanel.setTrackFace(false);
				tracking = false;
			}
		}
	}
	public class MotionDetectListener implements ActionListener {
		@Override
		public void actionPerformed(ActionEvent e) {
			System.out.print("MotionDetectListener: " + e);
			if (!detecting) {
				button2.setText("Detecting Motion");
				button1.setEnabled(false);
				motionPanel.setMotionDetect(true);
				motionPanel.centerPanTilt();
				detecting = true;
			} else {
				button2.setText("Motion Detect");
				button1.setEnabled(true);
				motionPanel.setMotionDetect(false);
				detecting = false;
			}
		}
	}
	public static void main( String args[] )  {  
		new AndroiDAQ_Interface();  
	}
} 

 

And here is a screen shot that shows the viewer or Interface class GUI:

 

AndroiDAQInterface

 

As stated above, we will modify the webcam image capture class to use as our heavy-lifting controller class. This controller class utilizes a Java thread to open the USB communications class and to acquire imagery from the webcam. You will note that the controller class has a threaded run() function, where the USB communication class is loaded and where the USB webcam is initialized. Images are grabbed in a while loop, so that imagery is captured for as long as the frame grabber is open. This while loop is where all the image processing sub-routines are called, and when all image processing is done, the results are written to the interface class for user visualization via the repaint() function. Below is the Java source code for our new controller class; please read through the code to understand the similarities between it and the webcam capture class from above:

 

//AndroiDAQ_Cam_Controller.java
// Copyright Controlled Capture Systems, Rick Fluck 2013-2014 
/* Portions of code are credited to Andrew Davison, July 2013
 * Uses AndroiDAQ module and various firmware changes to control one  
 * servo and one stepper motor
 * Uses face detection and motion detection using JavaCV by Samuel Audet 
 * AndroiDAQ information at: http://www.controlcapture.com/androiddaqmod
 */
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Font;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.JPanel;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.highgui.VideoCapture;
import org.opencv.highgui.Highgui;
public class AndroiDAQ_Cam_Controller extends JPanel implements Runnable {
	private static final long serialVersionUID = 1L;
	// dimensions of each image; the panel is the same size as the image 
	private static final int WIDTH = 640;  
	private static final int HEIGHT = 480;
	private static final int DELAY = 45;  // time (ms) between redraws of the panel
	//private static final int DETECT_DELAY = 150;   // time (ms) between each face detection
	private static final String CROSSHAIRS_FNM = "/crosshairs.png";
	private static final int MIN_MOVE_REPORT = 3;    // for reporting a move
	
	private static final int HALF_HORIZONTAL = 320; // half of the 640-pixel image width
	private static final int HALF_VERTICAL = 240;  // half of the 480-pixel image height
	private static final int CAMERA_ID = 0;
	private Rectangle faceRect;     // holds the coordinates of the highlighted face
	private boolean faceDetected = false;
	private volatile boolean isRunning;
	private volatile boolean isFinished;
        // used for the average ms snap time information
	private int imageCount = 0;
	private Font msgFont;
	Rect rect;
	private Point prevXYPoint = null; // holds the coordinates of the motion COG
	private Point xyPoint = null; 
	private BufferedImage crosshairs;
	private AndroiDAQ_Comm serial;
	MotionDetection md;
	FaceDetection fd;
  
	private BufferedImage image; 
	Mat webcam_image;
	int yfactor;
	int prevYfactor;
	int xStep;
	int yStep;
	long detectDuration;
	boolean trackFace = false;
	boolean motionDetect = false;
	boolean processing = false;
        Thread thread = new Thread(this);
	
  public AndroiDAQ_Cam_Controller() {
    setBackground(Color.LIGHT_GRAY);
    msgFont = new Font("SansSerif", Font.BOLD, 18);
    // load the crosshairs image (a transparent PNG)
    crosshairs = loadImage(CROSSHAIRS_FNM);
    faceRect = new Rectangle();
    thread.setDaemon(true);
    thread.start();   // start updating the panel's image
  } 
  private BufferedImage loadImage(String imFnm) {
  // return an image
    BufferedImage image = null;
    try {
      image = ImageIO.read(this.getClass().getResource(imFnm));   // read in as an image
       //System.out.println("Reading image " + imFnm);
    } catch (Exception e) {
      System.out.println("Could not read image from " + imFnm);
    }
    return image;
  }  
  
  public Dimension getPreferredSize() {
  // make the panel wide enough for an image
  return new Dimension(WIDTH, HEIGHT); }
  
  public void writeToPort(String send) {
  	//System.out.print("String to send: " + send + "\n");
   AndroiDAQ_Comm.writetoport(send);
  }
  
  public void run() {
     //display the current webcam image every DELAY ms
     //The time statistics gathered here include the time taken to
     //detect movement.
  	 
	  VideoCapture grabber = initGrabber(CAMERA_ID);
	  webcam_image = new Mat();  
	  if (grabber == null)
		  return;
	    
	  Point pt;
	  serial = new AndroiDAQ_Comm();
	  //long duration;
	  isRunning = true;
	  isFinished = false;
	  BufferedImage temp; 
	  if(grabber.isOpened()){
		  while (isRunning) {
			  //long startTime = System.currentTimeMillis();
			  if (!processing) {
				  grabber.read(webcam_image);
				  temp = matToBufferedImage(webcam_image);
				  this.setImage(temp);  // update detector with new image
			  }
			  if (getTrackFace()) {
				  
				  if ((pt = fd.detect(webcam_image)) != null) {    // get new COG
					  	processing = true;
					  	prevXYPoint = xyPoint; 
	    	        	xyPoint = pt;
	    	        	//System.out.println("FaceCenterPoint =: " + cogPoint);
	    	        	reportXYChanges(xyPoint, prevXYPoint);
				  }
			  }
			  if (getMotionDetect()) {
				  
				  md.calcMove(webcam_image);    // update detector with new image
				  if ((pt = md.getCOG()) != null) {    // get new COG
					  processing = true;
					  prevXYPoint = xyPoint; 
					  xyPoint = pt;
					  reportXYChanges(xyPoint, prevXYPoint);
				  } 
			  }
			  //System.out.println("imageCount =: " + imageCount);
			  if (imageCount == 1) {
				  md = new MotionDetection(webcam_image);
				  fd = new FaceDetection();
			  }
			  imageCount++;
			  repaint();
			  //duration = System.currentTimeMillis() - startTime;
		  }
	  }
  closeGrabber(grabber, CAMERA_ID);
  System.out.println("Execution terminated");
  isFinished = true;
  }  
  private BufferedImage getImage(){  
	  return image;  
  }  
  
  private void setImage(BufferedImage newimage){  
    image = newimage;
    return;  
  } 
  
  public static BufferedImage matToBufferedImage(Mat matrix) {
	  int cols = matrix.cols();  
	  int rows = matrix.rows();  
	  int elemSize = (int)matrix.elemSize();  
	  byte[] data = new byte[cols * rows * elemSize];  
	  int type;  
	  matrix.get(0, 0, data);  
	  switch (matrix.channels()) {
	  	case 1:  
	  		type = BufferedImage.TYPE_BYTE_GRAY;  
	        break;  
	  	case 3:  
	        type = BufferedImage.TYPE_3BYTE_BGR;  
	        // bgr to rgb  
	        byte b;  
	        for(int i=0; i<data.length; i=i+3) {
	        	b = data[i];  
	        	data[i] = data[i+2];  
	        	data[i+2] = b;  
	        }  
	        break;  
	  		default:  
	  		return null;  
	  }  
	  BufferedImage image2 = new BufferedImage(cols, rows, type);  
	  image2.getRaster().setDataElements(0, 0, cols, rows, data);  
	  return image2;  
  } 
  
  private VideoCapture initGrabber(int ID) {
	  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
	  
	  VideoCapture grabber =new VideoCapture(0); 
	  try {
		  //grabber = FrameGrabber.createDefault(ID);
		  //grabber = new FrameGrabber(ID);
		  //grabber = new OpenCVFrameGrabber("");
		  //grabber.setFormat("dshow");       // using DirectShow
		  grabber.set(Highgui.CV_CAP_PROP_FRAME_WIDTH , WIDTH);    
		  // default is too small: 320x240
		  grabber.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT);
		  //grabber.start();
	  } catch(Exception e) {
   		System.out.println("Could not start grabber");  
   		System.out.println(e);
   		System.exit(1);
	  }
	  return grabber;
  } 
  private void closeGrabber(VideoCapture grabber, int ID)  {
    try {
    	isRunning = false;
    	grabber.release();
    } catch(Exception e) {
    	System.out.println("Problem stopping grabber " + ID);  }
  }  
  
  public void setTrackFace(boolean setting) {
	  trackFace = setting;
  }
  
  public boolean getTrackFace() {
	return trackFace;
  }
  
  public void setMotionDetect(boolean setting) {
	  motionDetect = setting;
  }
  
  public boolean getMotionDetect() {
	return motionDetect;
  }
  
  public void centerPanTilt() {
	  String text = "013\r";
	  serial.writetoport(text);
  }
  
  private void reportXYChanges(Point xyPoint, Point prevXYPoint) {
  // compare cogPoint and prevCogPoint
	  if (prevXYPoint == null)
      return;
    
	  //For face detection, camera is mounted on pan tilt 
	  //assembly to center detected face
	  //if (faceDetected) {
	  if (fd.faceDetected()) {
		  if (xyPoint.x < HALF_HORIZONTAL) {
			  //xStep = (int) (Math.round(HALF_HORIZONTAL - cogPoint.x)/28);   
			  xStep = (int) (Math.round(HALF_HORIZONTAL - xyPoint.x)/57.6);   
			  //System.out.println("Stepper Left by: " + Math.abs(xStep));
			  if (xStep != 0) {
				  String text = "015\r" + "0" + Math.abs(xStep) + "\r" + "00\r";
				  //System.out.println("String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  }
			  xStep = 0;
		  }
		  if (xyPoint.x > HALF_HORIZONTAL) {
			  //xStep = (int) (Math.round(cogPoint.x - HALF_HORIZONTAL)/28);   
			  xStep = (int) (Math.round(xyPoint.x - HALF_HORIZONTAL)/57.6); 
			  //System.out.println("Stepper Right by: " + Math.abs(xStep));
			  if (xStep != 0) {
				  String text = "015\r" + "0" + Math.abs(xStep) + "\r" + "01\r";
				  // System.out.println("  String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  }
			  xStep = 0;
		  }
		  if (xyPoint.y > HALF_VERTICAL) {
			  yStep = (int) (Math.round(xyPoint.y - HALF_VERTICAL)/2.8);     
			  // so + y-axis is up screen
			  //int yfactor = 1500 + ((cogPoint.y * 2) - 480);
			  yfactor = 1500 - yStep;
			  System.out.println("Calc mS for Servo: " + yfactor);
			  if (yfactor != prevYfactor) {
				  prevYfactor = yfactor;
				  
				  String text = "014\r" + "0" + yfactor + "\r";
				  //System.out.println("  String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  }
		  } 
		  if (xyPoint.y < HALF_VERTICAL){
			  yStep = (int) (Math.round(HALF_VERTICAL - xyPoint.y)/2.8);    
			  // so + y-axis is up screen
			  //int yfactor =  1500 - (480 - (cogPoint.y * 2));
			  yfactor =  1500 + yStep;
			  System.out.println("Calc mS for Servo: " + yfactor);
			  if (yfactor != prevYfactor) {
				  prevYfactor = yfactor;
				  
				  String text = "014\r" + "0" + yfactor + "\r";
				  //System.out.println("  String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  }
		  }
		  processing = false;
	  }
	  if (motionDetect) {
		  // calculate the distance moved and convert to steps needed
		  int xStep = (int) (Math.round(xyPoint.x - prevXYPoint.x));  
		  int yStep = -1 *(xyPoint.y - prevXYPoint.y);      // so + y-axis is up screen
		  //System.out.println("xStep, yStep: (" + xStep + ", " +yStep + ")");
		  //System.out.println("yStep: " + yStep);
		  //System.out.println("COGY: " + cogPoint.y);
		  //System.out.println("COGX: " + cogPoint.x);
		  int distMoved = (int) Math.round( Math.sqrt( (xStep*xStep) + (yStep*yStep)) );
		  //System.out.println("xStep: " + xStep);
		  // the angle of the motion vector could be computed with
		  // Math.toDegrees(Math.atan2(yStep, xStep)), but it is not used here
        
		  // for motion detection, camera needs to be removed from pan tilt 
		  //assembly as it will be an aimer for camera view
		  if (distMoved > MIN_MOVE_REPORT) {
			  if (xStep < 0) {
				  int xfactor = (int) (Math.round(HALF_HORIZONTAL - xyPoint.x) /57.6);  
				  //System.out.println("  Calc steps Stepper right: " + xfactor);
				  String text = "015\r" + "0" + Math.abs(xfactor) + "\r" + "00\r";
				  //System.out.println("  String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  } 
			  if (xStep > 0) {
				  int xfactor = (int) (Math.round(xyPoint.x - HALF_HORIZONTAL) /57.6); 
				  //System.out.println("  Calc steps Stepper left: " + xfactor);
				  String text = "015\r" + "0" + Math.abs(xfactor) + "\r" + "01\r";
				  // System.out.println("  String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  }
         
			  if (xyPoint.y < 240) {
				  int xfactor = (int) (1500 - (((xyPoint.y * 2) - 480) /2.8));
				  //System.out.println("  Calc mS for Servo: " + xfactor);
				  String text = "014\r" + "0" +xfactor + "\r";
				  //System.out.println("  String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  } 
			  if (xyPoint.y > 240) {
				  int xfactor =  (int) (1500 + ((480 - (xyPoint.y * 2))/2.8));
				  //System.out.println("  Calc mS for Servo: " + xfactor);
				  String text = "014\r" + "0" + xfactor + "\r";
				  // System.out.println("  String to be sent: " + text);
				  serial.writetoport(text);
				  try {
					  Thread.sleep(15);  // wait until DELAY time has passed
				  } catch (Exception ex) {
				  }
			  } 
		  }
		  processing = false;
	  }
  } 
  public void paintComponent(Graphics g) {
	  // Draw the image, the rectangle (and crosshairs) around a detected
	  // face, and the average ms snap time at the bottom left of the panel. 
	  super.paintComponent(g);
	  Graphics2D g2 = (Graphics2D) g;
	  g2.setFont(msgFont);
	  BufferedImage temp = getImage(); 
	  // draw the image, stats, and detection rectangle
	  if (temp != null) {
		  g2.setColor(Color.YELLOW);
		  g2.drawImage(temp,10,10,temp.getWidth()- 20,temp.getHeight() - 20, this);   
		  // draw the snap
		  //String statsMsg = String.format("Snap Avg. Time:  %.1f ms",
                                        //((double) totalTime / imageCount));
		  //g2.drawString(statsMsg, 10, HEIGHT-15);  
                        // write statistics in bottom-left corner
		  if (getTrackFace()) {
			  //drawRect(g2); //called to draw crosshairs as well.
			  if (xyPoint != null)
				  drawCrosshairs2(g, xyPoint.x, xyPoint.y);   // positioned at COG
		  }
		  if (!getTrackFace()) {
			  removeCrosshairs(g2);
		  }
		  if (getMotionDetect()) {
			  if (xyPoint != null)
				  drawCrosshairs2(g, xyPoint.x, xyPoint.y);   // positioned at COG
		  }
		  if (!getMotionDetect()) {
			  removeCrosshairs2(g);
		  }
	  } else {  // no image yet
		  g2.setColor(Color.BLUE);
		  g2.drawString("Loading from camera " + CAMERA_ID + "...", 5, HEIGHT-10);
	  }
  } 
  @SuppressWarnings("unused")
private void drawRect(Graphics2D g2) {
	  //use the face rectangle to draw a yellow rectangle around the face, with 
	  //crosshairs at its center.
	  // The drawing of faceRect is in a synchronized block since it may be being
	  // updated or used for image saving at the same time in other threads.
    
	  synchronized(faceRect) {
		  if (faceRect.width == 0) {
			  return;
		  }
		  // draw a thick yellow rectangle
		  g2.setColor(Color.YELLOW);
		  g2.setStroke(new BasicStroke(6)); 
		  //g2.drawRect(faceRect.x, faceRect.y, faceRect.width, faceRect.height);
		  
		  int xCenter = faceRect.x + faceRect.width/2;
		  int yCenter = faceRect.y + faceRect.height/2;
		  prevXYPoint = xyPoint; 
		  xyPoint = new Point(xCenter, yCenter) ;
		  //System.out.println("cogPoint " + cogPoint);
		  if (faceDetected) {
			  drawCrosshairs(g2, xCenter, yCenter);
		  } else {
			  removeCrosshairs(g2);
		  }
	  }
  }  
  private void drawCrosshairs(Graphics2D g2, int xCenter, int yCenter) {
	  // draw crosshairs graphic or a red circle
	  if (crosshairs != null) {
		  g2.drawImage(crosshairs, xCenter - crosshairs.getWidth()/2, 
                              yCenter - crosshairs.getHeight()/2, this);
	  } else {    
		  g2.setColor(Color.RED);
		  g2.fillOval(xCenter-10, yCenter-10, 20, 20);
	  }
  } 
  
  private void drawCrosshairs2(Graphics g, int xCenter, int yCenter) {
	  // draw crosshairs graphic or a red circle
 
	  if (crosshairs != null) {
		  g.drawImage(crosshairs, xCenter - crosshairs.getWidth()/2, 
                              yCenter - crosshairs.getHeight()/2, this);
	  } else {    
		  g.setColor(Color.RED);
		  g.fillOval(xCenter-10, yCenter-10, 20, 20);
	  }
  }  
  
  private void removeCrosshairs(Graphics2D g2) {
	  // draw crosshairs graphic or a red circle
	  //System.out.println("Removing Crosshairs");
	  g2.drawImage(null, 0, 0, this);
  } 
  private void removeCrosshairs2(Graphics g) {
	  // draw crosshairs graphic or a red circle
	  //System.out.println("Removing Crosshairs");
	  g.drawImage(null, 0, 0, this);
  }   
  public void closeDown() {
    //Terminate run() and wait for it to finish.
    //This stops the application from exiting until everything
    //has finished. 
   
    isRunning = false;
    while (!isFinished) {
      try {
        Thread.sleep(DELAY);
      } 
      catch (Exception ex) {}
    }
    //serial.disconnect();
  } 
} 

 

You will notice that the controller class has a new function called reportXYChanges(). This function correlates the image matrix data from the webcam with real-world movement of the pan and tilt assembly. Several conditional statements in the while loop select between face tracking and motion detection, each of which controls the pan and tilt assembly differently. Face tracking uses a webcam that is mounted on the pan and tilt assembly and tries to center the detected face in the captured image frame, whereas motion detection uses a stationary webcam, placed next to or in front of the pan and tilt assembly, and then aims the pan and tilt assembly, with perhaps a Nerf pistol mounted to it, toward the object that is creating the motion in the image. In both cases you may notice that we use an external image, crosshairs.png, which is an image of a red target symbol. This image is not provided, as it is easy to draw your own; it should be saved in your project "src" folder in Eclipse so that the Java runtime can find it on execution. The target symbol is drawn as an overlay on the captured images, using the center of gravity (COG) x and y points, to show the user where the detected face or detected motion was found.

 

So how does the mathematics of the reportXYChanges() function work? For face tracking, one needs to correlate where the face was detected in the captured image with the known center of the camera’s resolution, and then offset the pan and tilt assembly by some number of degrees up or down for the servo motor, and some number of steps left or right for the stepper motor. For example, the Logitech QuickCam Pro 5000 that we are using in our lab has a stated horizontal field of view of 54 degrees, which corresponds to a 37 degree vertical field of view, and a 640 by 480 pixel imaging chip. This works out to 11.85 pixels per degree horizontally (640/54 = 11.85) and 12.97 pixels per degree vertically (480/37 = 12.97). We also know from our past articles that the stepper motor we are using in the lab moves 1.875 degrees per step, or 192 steps per revolution, though in half-step mode, like our setup in the lab, this becomes 384 steps per revolution, or 0.9375 degrees per step. This means that sweeping the pan and tilt assembly across the camera’s entire 54 degree field of view takes 57.6 stepper steps in half-step mode (54/0.9375 = 57.6 steps).
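As a quick sanity check of these ratios, here is a small, self-contained Java sketch that recomputes the constants above from the camera and stepper specifications; the class and variable names are ours and are not part of the article’s classes.

// Hypothetical helper that recomputes the pan/tilt conversion constants
// from the camera and stepper specifications quoted above.
public class PanTiltConstants {
	public static void main(String[] args) {
		double hFovDeg = 54.0, vFovDeg = 37.0;      // camera field of view in degrees
		int widthPx = 640, heightPx = 480;          // camera resolution
		double degPerHalfStep = 1.875 / 2.0;        // stepper in half-step mode
		System.out.println("Horizontal pixels/degree: " + widthPx / hFovDeg);    // ~11.85
		System.out.println("Vertical pixels/degree:   " + heightPx / vFovDeg);   // ~12.97
		System.out.println("Half steps per 54-degree sweep: " + hFovDeg / degPerHalfStep);  // 57.6
	}
}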

 

So theoretically, if a face is detected at 280 pixels horizontally in the image and we want the face to be centered in the next image, we need to move the pan and tilt assembly by 40 pixels (the 320 pixel image center minus the face at 280 pixels = 40 pixels), which represents 3.37 degrees of field of view (40/11.85 = 3.37 degrees), or approximately 4 half steps (3.37/0.9375 = 3.6 half steps). We found this to be very coarse in practice, as the pan and tilt head kept hunting for the face on each new image iteration, so instead, as shown in the code, we take the absolute value of the detection location minus the 320 pixel center value and divide it by the 57.6 steps that cover the 54 degree field of view. This proved more restrained and, because of the smaller movement steps, prevented the hunting of the pan and tilt head. So for the example, 320 - 280 = 40 pixels, and 40/57.6 works out to no more than a single step, so the head creeps toward the target in baby steps, re-checking the detected face location on each new frame until the face is centered in the image.
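For clarity, here is a small, hypothetical sketch of that horizontal (pan) calculation, mirroring the expression used in reportXYChanges() above; PanStepSketch and panStepsFor() are our own names and not part of the controller class.

// Hypothetical sketch of the pan (stepper) calculation used in reportXYChanges():
// the offset from the 320-pixel image center is scaled by the 57.6 steps that
// cover the camera's 54-degree horizontal field of view.
public class PanStepSketch {
	static int panStepsFor(int faceX) {
		final int HALF_HORIZONTAL = 320;                   // image center (640/2)
		int offsetPx = Math.abs(faceX - HALF_HORIZONTAL);  // pixels off center
		return (int) (offsetPx / 57.6);                    // direction ("00"/"01") is sent separately
	}
	public static void main(String[] args) {
		System.out.println(panStepsFor(200));  // a face 120 pixels off center -> 2 steps
	}
}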

 

For the servo motor, which moves the pan and tilt head vertically, we need to know its minimum and maximum angular limits and then derive a conversion factor for the pulse-width value (in microseconds, µs) that is sent to move the servo. Our lab servo motor has a minimum value of 1038 µs and a maximum value of 1952 µs, which spans 110 degrees of total up and down movement. Using these figures we can derive a conversion factor: take the full span of timings, 1952 - 1038 = 914 µs, and divide that number by the total degrees of movement (914/110 = 8.3 µs/degree). The 8.3 figure is our conversion factor. Keep in mind that servo motors are centered by sending a value of 1500 µs.
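A tiny worked version of that derivation, under the same assumptions (1038 and 1952 as the servo’s end-point values and 110 degrees of travel), might look like this; the class name is ours.

// Hypothetical sketch: deriving the servo conversion factor described above.
public class ServoFactor {
	public static void main(String[] args) {
		double minPulse = 1038, maxPulse = 1952;   // servo end-point values (microseconds)
		double travelDeg = 110;                    // total up/down travel in degrees
		double usPerDeg = (maxPulse - minPulse) / travelDeg;  // (1952 - 1038)/110 = ~8.3
		System.out.println("Conversion factor: " + usPerDeg + " us/degree; center value = 1500");
	}
}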

 

So theoretically, if the face appears at 180 pixels vertically in the image, we can derive the amount to move the servo as follows: 240 (the vertical center of the image) minus 180 (where the face was detected) = 60 pixels. We then divide the 60 pixels by the vertical pixels-per-degree value of 12.97, which gives 4.63 degrees. Multiplying 4.63 degrees by our conversion factor of 8.3 gives about 38 µs, which we apply to the servo center value of 1500: we add or subtract it, depending on whether we are going up or down. Since we are going down to center the face in the next image frame, we subtract the 38 µs from 1500 to get 1462 µs, which is sent to the servo motor. We found that this method was also very coarse and that the pan and tilt head kept hunting for the face on each image iteration, so in the code you will see that we instead take the difference between the vertical center of 240 and the vertical pixel location where the face was detected, and divide that number by 2.8. So for the example, 240 - 180 = 60, and 60/2.8 gives a value of about 21.4, which is then added to or subtracted from the 1500 µs centering value for the servo motor, depending on whether we are going up or down. We found this more restrained approach prevented the servo motor from hunting.
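Here is a small, hypothetical sketch of that vertical (tilt) calculation as it is actually coded in reportXYChanges(), using the 2.8 divisor rather than the theoretical 8.3 per-degree factor; TiltPulseSketch and tiltPulseFor() are our own names.

// Hypothetical sketch of the tilt (servo) calculation used in reportXYChanges():
// the offset from the 240-pixel vertical center is divided by 2.8 and applied
// to the servo's 1500 center value.
public class TiltPulseSketch {
	static int tiltPulseFor(int faceY) {
		final int HALF_VERTICAL = 240;   // image center (480/2)
		final int SERVO_CENTER = 1500;   // servo centering value
		int step = (int) (Math.abs(faceY - HALF_VERTICAL) / 2.8);
		// a face above the image center adjusts the value one way, below center
		// the other, mirroring the two branches in reportXYChanges()
		return (faceY < HALF_VERTICAL) ? SERVO_CENTER + step : SERVO_CENTER - step;
	}
	public static void main(String[] args) {
		System.out.println(tiltPulseFor(180));  // the worked example above: 60 px -> ~21 -> 1521
	}
}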

 

Motion tracking is handled in much the same way in the reportXYChanges() function; reading the source code will show you how it is done and how it differs from facial detection. Keep in mind that for motion detection the camera is mounted away from the pan and tilt assembly, in a stationary position, so that the pan and tilt head can move around within the camera’s field of view; otherwise, every time the pan and tilt assembly moved, the camera would move too, and everything in the imagery would appear to be in motion.

 

The last two pieces of source code for this article are the face and motion detection classes. Both classes provide a Java Point, which represents a location in (x,y) coordinate space with integer precision. These points are calculated by the FaceDetection class and by the MotionDetection class, using simple OpenCV algorithms in each class. The points are then used by reportXYChanges() to calculate the steps for the stepper motor and the angle value for the servo motor, to move the pan and tilt assembly as a whole via the JSSC-based writetoport() function. This is done by using AndroiDAQ menu items 14 and 15 and adding the appropriate step and angle values for moving the stepper and servo motors. Now let’s review the FaceDetection class, whose Java source code is shown below:

 

Here, when the controller class calls the detect(Mat inputframe) function from its while loop, the FaceDetection class converts the matrix image received from the controller class into a gray-scale matrix image. It then runs a face cascade classifier on the image to see if it can find a face; if a face is found, it calculates the center of gravity (COG) of the detected face and returns that calculated point to the controller class, which uses it to move the pan and tilt assembly. If no face is found by the cascade classifier, the FaceDetection class returns null. You can read more about facial detection and cascade classification on the OpenCV tutorial website here:

 

http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html

 

// FaceDetection.java
// Copyright 2013 Rick Fluck
import java.awt.Point;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
public class FaceDetection {
	
	private CascadeClassifier face_cascade; 
	boolean faceDetected;
	// Create a constructor method  
	public FaceDetection() {
		face_cascade = new CascadeClassifier("resources/lbpcascade_frontalface.xml");  
		if(face_cascade.empty()) {
			System.out.println("--(!)Error loading A\n");  
			return;  
		} else {
			System.out.println("Face classifier loaded");  
		}  
	}  
	public Point detect(Mat inputframe){
		Point center = null;
		Mat mRgba=new Mat();  
		Mat mGrey=new Mat();  
		MatOfRect faces = new MatOfRect();  
		inputframe.copyTo(mRgba);  
		inputframe.copyTo(mGrey);  
		Imgproc.cvtColor( mRgba, mGrey, Imgproc.COLOR_BGR2GRAY);  
		Imgproc.equalizeHist( mGrey, mGrey );  
		face_cascade.detectMultiScale(mGrey, faces);
		int total = faces.toArray().length;
		if (total == 0) {
			 //System.out.println("No faces found");
			 faceDetected = false;
			 return null;
		 } else if (total > 1)  { // this case should not happen, but included for safety
			 //System.out.println("Multiple faces detected (" + total + "); using the first");
			 faceDetected = true;
		 } else {
			 //System.out.println("Face detected");
			 faceDetected = true;
		 }
		//System.out.println(String.format("Detected %s faces", faces.toArray().length));  
		for(Rect rect:faces.toArray()) {
			int xPoint = (int) (rect.x + rect.width*0.5);
			int yPoint = (int) (rect.y + rect.height*0.5);
			center = new Point(xPoint, yPoint);  
			//System.out.println("Points center = " + xPoint + "," + yPoint); 
	       //Core.ellipse( mRgba, center, new Size( rect.width*0.5, rect.height*0.5), 
		   //0, 0, 360, new Scalar( 255, 0, 255 ), 4, 8, 0 );  
		}  
		return center;  
	} 
	
	public boolean faceDetected() {
		
		return faceDetected;		
	}  
}

 

Below is the MotionDetection class Java source code.

Here, when the controller class calls the calcMove(Mat currFrame) function from its while loop, the MotionDetection class converts the received matrix image into a gray-scale matrix image and then compares it to the previously acquired image, which was saved in memory, to determine whether motion occurred between the two frames. This is called image differencing of consecutive video frames. If motion is detected between the two images, the center of gravity of the detected motion is calculated and returned as a point; otherwise null is returned. You can read more about motion detection methods on the OpenCV tutorial website here:

 

http://docs.opencv.org/modules/imgproc/doc/motion_analysis_and_object_tracking.html

 

Andrew Davison also describes motion detection and various methods to implement it. You can read his fine paper here:

 

http://fivedots.coe.psu.ac.th/~ad/jg/nui03/motion_v4.pdf

 

// MotionDetection.java
// Copyright 2013 Rick Fluck
import java.awt.Dimension;
import java.awt.Point;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;
public class MotionDetection {
  private static final int MIN_PIXELS = 100;
       // minimum number of non-black pixels needed for COG calculation
  private static final int LOW_THRESHOLD = 64;
  private static final int MAX_PTS = 5;
  private Mat prevImg, currImg, diffImg;     // grayscale images (diffImg is bi-level)
  private Dimension imDim = null;    // image dimensions
  private Point[] cogPoints;   // array for smoothing COG points
  private int ptIdx, totalPts;
  
  public MotionDetection(Mat firstFrame) {
    if (firstFrame.empty()) {
      System.out.println("No frame to initialize motion detector");
      System.exit(1);
    }
    System.out.println("Initializing OpenCV motion detector...");
    imDim = new Dimension( firstFrame.width(), firstFrame.height() );
    cogPoints = new Point[MAX_PTS];
    ptIdx = 0;
    totalPts = 0;
    prevImg = convertFrame(firstFrame);
    currImg = null;
    diffImg = new Mat();  
  } 
  public void calcMove(Mat currFrame) {
	 // use a new image to create a new COG point
  
	  if (currFrame == null) {
		  System.out.println("Current frame is null");
		  return;
	  }
	  if (currImg != null)  // store old current as the previous image
		  prevImg = currImg;
	  currImg = convertFrame(currFrame);
         // calculate absolute difference between currFrame & previous image;
         // large value means movement; small value means no movement
	  Core.absdiff(currImg, prevImg, diffImg);
           
	  Imgproc.threshold(diffImg, diffImg, LOW_THRESHOLD, 255, Imgproc.THRESH_BINARY);
	  Point cogPoint = findCOG(diffImg);
	  if (cogPoint != null) {    // store in points array
		  cogPoints[ptIdx] = cogPoint;
		  ptIdx = (ptIdx+1)%MAX_PTS;   // the index cycles around the array
		  if (totalPts < MAX_PTS) totalPts++;
	  }
  } 
  public Mat getCurrImg() {  
	  return currImg;  
  }
  public Mat getDiffImg() {  
	  return diffImg;  
  }
  public Dimension getSize() {  
	  return imDim;  
  }
  private Mat convertFrame(Mat img) {
  /* Conversion involves: blurring, converting color to grayscale, and equalization */
	  Mat mRgba = new Mat();
	  Mat grayImg = new Mat();
	  img.copyTo(mRgba); 
	  img.copyTo(grayImg);
	// convert to gray scale
	  Imgproc.cvtColor(mRgba, grayImg, Imgproc.COLOR_BGR2GRAY); 
	// blur image to get reduce camera noise
	  Imgproc.GaussianBlur(grayImg, grayImg, new Size(9,9),0,0); 
	// spread out the gray scale range       
	  Imgproc.equalizeHist( grayImg, grayImg );
	  return grayImg;
  }
  private Point findCOG(Mat diffImg) {
  /*  If there are enough non-black pixels in the difference image
      (non-black means a difference, i.e. movement), then calculate the moments,
      and use them to calculate the (x,y) center of the white areas.
      These values are returned as a Point object. */
  
    Point pt = null;
    int numPixels = Core.countNonZero(diffImg);   // non-zero (non-black) means motion
    if (numPixels > MIN_PIXELS) {
     
      Moments moments = new Moments();
      moments = Imgproc.moments(diffImg, true);    // 1 == treat image as binary (0,255) --> (0,1)
      double m00 = moments.get_m00();
      double m10 = moments.get_m10();
      double m01 = moments.get_m01();
      if (m00 != 0) {   // create XY Point
        int xCenter = (int) Math.round(m10/m00);
        int yCenter = (int) Math.round(m01/m00);
        pt = new Point(xCenter, yCenter);
      }
    }
    return pt;
  }  
  
  public Point getCOG() {
  /* return average of points stored in cogPoints[],
     to smooth the position */
	  if (totalPts == 0)
      return null;
	  int xTot = 0;
	  int yTot = 0;
	  for(int i=0; i < totalPts; i++) {
		  xTot += cogPoints[i].x;
		  yTot += cogPoints[i].y;
    }
    return new Point( (int)(xTot/totalPts), (int)(yTot/totalPts));
  }
} 

 

So there you have it: you should now see that with only five fairly straightforward Java classes, the viewer or interface class, the controller class (which also doubles as the webcam image acquisition class), the USB communications class, the motion detection class, and the facial detection class, you can move your pan and tilt assembly using the AndroiDAQ module and a webcam connected to your laptop or desktop computer. As you can see, the movement of the pan and tilt assembly is fairly crude and not terribly accurate, but we will leave that fine tuning to you. After all, the military and other agencies spend millions of dollars on motion control projects such as this one to create similar targeting machines; really the only difference is the mathematics they use, which is a lot more complicated than what has been presented in this article. At the beginning of this article we showed a sneak peek of our newest AndroiDAQ robot. Stay tuned for some fun videos that we will make as progress continues on it.