The constraints on the robot operating space reduce the risk of contact between human and robot outside the safe operation region to which the robot is restricted. During collaboration, however, the human can enter the robot’s operating space, either intentionally (e.g., placing the meat on the table) or unintentionally. In these instances, additional safety precautions are needed to prevent undesired contact between the human and the robot. To address this need, we demonstrate how an instrumented knife can be used to detect contact, and use preliminary tests conducted while executing the example meat processing tasks to inform guidelines for developing a more comprehensive safety protocol. In this initial execution of the instrumented knife framework, we hypothesized that contact with meat could be detected using a proximity sensor and an inertial measurement unit (IMU). Further, we hypothesized that, because of the unique nuances of the different cutting tasks, a distinct contact detection approach would be needed for each task, but that each approach would generalize across individual pieces of meat. We collected data from a human using a sensor-equipped knife while performing three meat processing tasks (slicing, trimming, and cubing) to determine the accuracy of using the sensors for contact detection. Although beyond the scope of this experiment, determining that the knife is in contact with meat is a critical step toward a broader safety protocol, which could combine this contact detection with visual and control inputs to evaluate whether the knife should be in contact with an object, and what kind of object the knife is touching. Here, we present a proof of concept that an instrumented knife can provide valuable contact detection feedback for this eventual safety system.
Design of instrumented knife The knife was instrumented with a SparkFun ESP32-S2 Thing Plus microprocessor (SparkFun Electronics, Niwot, CO), a SparkFun 9 Degrees of Freedom (DoF) IMU Breakout, and a SparkFun Proximity Sensor Breakout, using hook-and-loop attachments for easy repositioning and cleaning. The IMU was selected as a candidate sensor because prior work33 leveraged accelerometry as a tool for collision detection. Similarly, time-of-flight proximity sensors have been used successfully for collision detection34, so we selected proximity sensing as a second candidate sensor. To accommodate ease of use and maximize logical placement of these sensors, the microprocessor and IMU were placed on either side of the knife handle, just below where the handle is gripped. The proximity sensor was placed below the handle, perpendicular to the knife, so that it aimed along the knife blade toward the object being cut. These positions, as well as the selection of sensors, were confirmed in a preliminary reliability test. In this test, sensor readings were collected at 100 Hz while a human repeatedly cut into a block of butter. While the knife was in contact with the butter, the human pressed a button to code ground truth data representing contact. An initial attempt at using these sensor data to classify that contact yielded acceptably low error rates (data not shown). Based on the initial success of this instrumentation system confirmation exercise, we progressed to demonstrate the system in a meat processing context.
Data preparation To determine whether this knife instrumentation system would be able to classify when the knife is in contact with meat, we conducted an experiment using the knife to perform the three target meat processing tasks (slicing, trimming, and cubing) on two pork loins. The experiment resulted in 23, 26, and 27 replicates of slicing, trimming, and cubing, respectively. The differences in replicate number are due to some slicing actions not being recorded, differences in fat content among some slices (i.e., some slices did not need trimming), and the smaller number of slicing actions needed to create trimmable and cubeable slices. Based on success in the preliminary test with butter, the microprocessor controlling the instrumentation system was programmed using Arduino IDE to collect and log data from the proximity and IMU sensors at 100 Hz. This data collection resulted in 10 features (i.e., independent variables) for use in training the contact detection algorithm. These features included the proximity reading, as well as the x, y, and z axis readings of the accelerometer, magnetometer, and gyroscope. Ground truth measurements indicating when the knife was in contact with the meat were determined by the human operating the knife. When the human felt the knife come into contact with the meat, they pressed a button on the microprocessor. The button was continuously pressed during the entire time the knife was in contact with the meat. The microprocessor was coded such that this binary response variable (i.e., 1 if pressed, 0 otherwise) was logged with the 10 associated sensor measurements. The average replicate resulted in 1462 observations, a portion of which represented contact with the meat. The data were transferred in real-time from the microprocessor to local storage via universal serial bus (USB).
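To make the logged record format concrete, the sketch below parses one record into the 10 features and the binary contact label. This is a minimal stdlib Python illustration; the actual logging ran on the microprocessor via Arduino IDE, and the column order shown here is an assumption, not the study's specification.

```python
# Hypothetical parser for one logged record from the instrumented knife.
# Assumed CSV column order (not specified in the text): proximity, then
# accelerometer, magnetometer, and gyroscope x/y/z, then the contact button.

FEATURES = ["prox",
            "acc_x", "acc_y", "acc_z",
            "mag_x", "mag_y", "mag_z",
            "gyr_x", "gyr_y", "gyr_z"]

def parse_record(line):
    """Split one CSV line into a 10-feature dict and a 0/1 contact label."""
    values = line.strip().split(",")
    features = {name: float(v) for name, v in zip(FEATURES, values[:10])}
    contact = int(values[10])   # 1 while the button was held, 0 otherwise
    return features, contact

# Example record with made-up sensor values:
features, contact = parse_record("112,0.1,0.2,9.8,30,31,32,0.0,0.1,0.2,1")
```

At 100 Hz, one replicate of roughly 1462 such records corresponds to about 15 seconds of cutting.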
Prior to analysis, each reading from each replicate was centered and standardized, and values exceeding 5 standard deviations of the mean were omitted from analysis as presumed sensor errors.
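The centering, standardization, and 5-standard-deviation outlier screen described above can be sketched per sensor channel as follows (stdlib Python; the study performed this preprocessing in its own pipeline, so treat this as an illustrative equivalent):

```python
import statistics

def clean_channel(readings, sd_limit=5.0):
    """Center and standardize one sensor channel, then drop values
    beyond sd_limit standard deviations as presumed sensor errors."""
    mean = statistics.fmean(readings)
    sd = statistics.stdev(readings)
    z = [(r - mean) / sd for r in readings]
    return [v for v in z if abs(v) <= sd_limit]

# 500.0 mimics a one-off sensor error among otherwise stable readings;
# it exceeds 5 standard deviations and is removed.
raw = [10.0] * 25 + [10.2] * 25 + [500.0]
cleaned = clean_channel(raw)
```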
Because the ground truth observations were determined by when the human pressed the button, there was opportunity for human error. To minimize this, we visually confirmed that the human did not accidentally let go of the button during the cutting action by evaluating the consistency and duration of the indicator for contact in each cutting action. There will be residual human error associated with imperfect identification of the exact millisecond when the knife came into or exited contact with the meat; however, for the purposes of this proof-of-concept exercise, that error in ground truth coding was deemed acceptable. In future work exploring the refinement of this system for use in a broader safety protocol, high-speed imagery will be needed to confirm ground truth more precisely.
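The visual consistency check described above can be automated by locating contiguous runs of the contact indicator: a cut during which the button was held consistently yields a single long run, whereas an accidental release mid-cut splits it into two. A stdlib sketch of this idea (the study did this confirmation visually):

```python
def contact_runs(labels):
    """Return (start_index, length) for each contiguous run of 1s in the
    binary contact indicator. One long run per cut suggests the button
    was held consistently; multiple runs flag a possible accidental release."""
    runs, start = [], None
    for i, v in enumerate(labels):
        if v == 1 and start is None:
            start = i
        elif v == 0 and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(labels) - start))
    return runs

# A momentary release mid-cut shows up as two runs instead of one:
labels = [0, 0, 1, 1, 1, 0, 1, 1, 0]
runs = contact_runs(labels)
```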
Data analysis To explore the accuracy with which this prototype knife instrumentation system could classify whether the knife was in contact with or approaching the meat, we trained a random forest classification algorithm (RF) using the randomForest package37 of R v 4.2.1 (R Core Team, 2022). The target response to be classified was the binary indicator representing contact with the meat, and the features or independent variables used by the RF were the 10 sensor readings. The RF is a supervised machine learning algorithm that classifies data by bootstrapping samples from the original data, building a decision tree for each sample, and averaging the predictions from those trees in an ensemble to generate a final estimated outcome. The RF tends to be more robust than other classification approaches, with simple hyperparameter tuning and high prediction accuracy38. To derive our RF, we split the data from each cutting task into two subsets, with 60% of the observations used for training and 40% used for independent evaluation of classification accuracy. The 60% used for training was also used to tune the model parameters using the tuneRF function of the randomForest package. Based on this tuning, we bootstrapped 500 samples from the training dataset, building 500 trees with 4 to 6 variables tried at each split. The resulting tuned RF was then evaluated against the 40% of held-out data to determine the number of true and false positives and negatives, as well as the overall error rate. The error rate was calculated as the number of false positives and false negatives divided by the total number of observations.
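The error rate defined above follows directly from the four confusion-matrix counts. A minimal sketch (in Python for illustration; the study computed this in R), using made-up counts rather than results from the study:

```python
def error_rate(tp, fp, tn, fn):
    """Overall error rate as defined in the text: misclassified
    observations (false positives + false negatives) over all observations."""
    return (fp + fn) / (tp + fp + tn + fn)

# Illustrative counts only (not results from the study):
rate = error_rate(tp=850, fp=30, tn=100, fn=20)   # (30 + 20) / 1000 = 0.05
```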
To better understand the generalizability of the knife instrumentation system, we applied this training and testing strategy to three different data structures. In the first data structure (Superficial, Within Type; SWT), we combined all data from individual replicates within a cutting task into a single dataset for each cutting task. These data were then split 60/40 as described above to benchmark the accuracy of the system when an individual algorithm is trained for each type of cut. This was a superficial split, meaning that all data were considered equally during the splitting into training and testing sets, without explicitly accounting for grouping factors like replicate. In the second data structure (Superficial, Across Types; SAT), we sought to explore how an algorithm could generalize across cut types. In this structure, we combined all data from the three cutting tasks, and split this combined data 60/40 for training/testing, as described above. Again, this represented superficial splitting, as replicates were not considered as a grouping factor when determining the data splits. In the third data structure (By Replicate, Within Type; RWT), we explored the impact of training and testing within individual cuts of meat. We trained the RF using the cut-specific datasets, but split such that 60% of the replicates were used for training and 40% of the replicates were used for testing. This meant that during testing, some replicates would reflect entirely “new” pieces of meat, which is more representative of a real-world context where contact on new pieces of meat would need to be determined without prior opportunity to learn on data from that specific cut. Another series of random forest classifiers was then applied to each of these data structures to evaluate the ability of the sensors to predict when an object was approaching.
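The distinction between a superficial split and a by-replicate split can be illustrated with a short stdlib Python sketch (the study performed its splits in R; row and replicate structures here are toy stand-ins):

```python
import random

def superficial_split(rows, train_frac=0.6, seed=0):
    """SWT/SAT-style split: shuffle individual observations and split,
    ignoring which replicate (piece of meat) each row came from."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    k = int(len(shuffled) * train_frac)
    return shuffled[:k], shuffled[k:]

def by_replicate_split(rows, rep_of, train_frac=0.6, seed=0):
    """RWT-style split: hold out whole replicates, so test rows come
    from pieces of meat the model has never seen during training."""
    rng = random.Random(seed)
    reps = sorted({rep_of(r) for r in rows})
    rng.shuffle(reps)
    k = int(len(reps) * train_frac)
    train_reps = set(reps[:k])
    train = [r for r in rows if rep_of(r) in train_reps]
    test = [r for r in rows if rep_of(r) not in train_reps]
    return train, test

# Toy rows: (replicate_id, sensor_value), 5 replicates of 4 observations each
rows = [(rep, rep * 10 + i) for rep in range(5) for i in range(4)]
train, test = by_replicate_split(rows, rep_of=lambda r: r[0])
```

Under the by-replicate split, no replicate contributes rows to both sets, which is what makes RWT testing reflect truly unseen pieces of meat.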
In this analysis, the same 60/40 training/testing split was used to predict an incoming object in the 10-100 milliseconds prior to contact.
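Because the sensors logged at 100 Hz, the 10-100 millisecond pre-contact window corresponds to the 1 to 10 samples preceding each contact onset. A stdlib Python sketch of deriving these "approach" labels from the contact indicator (an assumed relabeling step, shown here to make the prediction target concrete):

```python
def approach_labels(contact, sample_rate_hz=100, min_ms=10, max_ms=100):
    """Label each sample 1 if it falls 10-100 ms before a contact onset.
    At 100 Hz this window is the 1 to 10 samples preceding each onset."""
    lo = min_ms * sample_rate_hz // 1000    # 1 sample before onset
    hi = max_ms * sample_rate_hz // 1000    # 10 samples before onset
    out = [0] * len(contact)
    for i in range(1, len(contact)):
        if contact[i] == 1 and contact[i - 1] == 0:   # contact onset
            for j in range(max(0, i - hi), i - lo + 1):
                out[j] = 1
    return out

# Contact begins at sample 15, so samples 5-14 form the approach window:
contact = [0] * 15 + [1] * 5
labels = approach_labels(contact)
```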
In a real-world application of a full-scale safety control system, an acceptable error rate would need to be very low. However, given that this instrumented knife is only one element of what could be incorporated into such a system, and that there was potential for human error in pressing the contact button used to determine ground truth in this proof-of-concept, we set a more lenient target error rate for this demonstration. Time-of-flight proximity sensors such as the one employed in this experiment have accuracy and precision generally estimated as a small percentage of the distance from the object. Based on the length of the instrumented knife, the expected precision was 1.5 mm. The expected accuracy and precision of the IMU were taken from the manufacturer-specified heading accuracy of the magnetometer and the sensitivities of the gyroscope and accelerometer. Based on these hardware specifications, we expected that the random forest approach would support high fidelity detection of contact.